• Cygnus Professionals
  • $106,390.00 - $159,250.00/year*
  • Flemington, NJ
  • Information Technology
  • Full-Time
  • 18 Brentwood Ct

Key Responsibilities:



  • Contributes to the design, prototyping, and delivery of software solutions within the big data ecosystem (primarily for the Hadoop platform)

  • Deep understanding of how Java interacts with various parts of the Hadoop platform, including technologies like HBase and Hive (see the sketch after this list)

  • Fluent in standard Hadoop service configurations and best practices, and able to advise customers when issues arise

  • Knowledgeable about HDP services and how they interact, with the ability to investigate how issues in one service impact the rest of the platform

  • Improves data governance and quality, increasing the reliability of our customers' data

  • Contributes to architecting the next generation of the platform

  • Identifies opportunities for improvement and presents recommendations to management

  • Involved in strategic planning discussions with technical and non-technical partners

  • Develops solutions and iterates quickly to continuously improve processes

  • Works independently on big data projects and/or serves as an analytics SME to provide new or enhanced data to the business

  • Seeks out and evaluates emerging big data technologies and open-source packages

  • Experience with the following technologies: Hadoop, HDFS, YARN, Atlas, Hive, HBase, Spark, and EMC Isilon (optional)

  • Proficiency in at least one of the following programming languages: Java, Python, SQL, R, Scala
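
As a rough illustration of the Java-to-HBase interaction mentioned above, the minimal sketch below writes and reads a single cell through the standard HBase client API. The table name ("customers"), column family, row key, and values are hypothetical placeholders, not details taken from this posting; a real deployment would pick up its cluster settings from hbase-site.xml on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        // Loads cluster configuration (ZooKeeper quorum, etc.) from
        // hbase-site.xml if present on the classpath.
        Configuration conf = HBaseConfiguration.create();

        // try-with-resources releases the connection and table handles
        // even if a call below fails.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {

            // Write one cell: row key "row-1", column family "d", qualifier "name".
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("name"),
                          Bytes.toBytes("Acme Corp"));
            table.put(put);

            // Read the same cell back and decode it.
            Result result = table.get(new Get(Bytes.toBytes("row-1")));
            byte[] value = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("name"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```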



Job Qualifications:



  • Undergraduate degree in Computer Science, Mathematics, Engineering (or related field) or equivalent experience preferred

  • 2-3 years of experience in a data integration, ETL, and/or business intelligence and analytics-related function preferred

  • Ability to work with broad parameters in complex situations

  • Experience in managing and manipulating large, complex datasets

  • Expert-level coding skills in languages such as Scala, Python, and/or other scripting languages; UNIX experience required

  • At least 1 year of experience with big data and the Hadoop ecosystem (HDFS, Spark, Hive, YARN) required

  • Some understanding of and exposure to streaming toolsets such as Kafka and Spark Streaming a plus (see the sketch after this list)

  • Experience with Agile development methodologies and tools to iterate quickly on product changes, developing user stories and working through the backlog (Continuous Integration and JIRA a plus)

  • Advanced oral and written communication skills
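
As a rough sketch of the Kafka/Spark Streaming pairing mentioned above, the example below uses Spark Structured Streaming to read a Kafka topic and print each micro-batch to the console. The broker address ("broker:9092") and topic name ("events") are hypothetical placeholders, and running it requires the spark-sql-kafka connector on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaStreamSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("kafka-stream-sketch")
            .getOrCreate();

        // Subscribe to a Kafka topic as an unbounded streaming source.
        Dataset<Row> events = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "events")
            .load();

        // Kafka delivers key/value as binary; cast the payload to a string.
        Dataset<Row> payload = events.selectExpr("CAST(value AS STRING)");

        // Print each micro-batch to the console for demonstration purposes.
        StreamingQuery query = payload.writeStream()
            .format("console")
            .outputMode("append")
            .start();

        query.awaitTermination();
    }
}
```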



- provided by Dice

* The salary listed in the header is an estimate based on salary data for similar jobs in the same area. Any salary or compensation figures found in the job description itself are accurate.
