Big Data Engineer

Employment Type: Full-Time

: Miscellaneous


Job Description
  • As a Senior Engineer, you will be an integral member of our Artificial Intelligence and Analytics team, responsible for the design and development of distributed data applications on the cloud
  • Partner with data analysts, product owners, and data scientists to understand requirements, identify bottlenecks, and find resolutions
  • Design, develop, and support Platform-as-a-Service (PaaS) frameworks, tools, and microservices leveraging public cloud infrastructure
  • Serve as an SME for all things 'Cloud and Big Data' and mentor other team members
  • Design and develop architectural models for our scalable data processing and data serving systems
  • Build data pipelines and ETL processes from heterogeneous sources
  • Build data ingestion from various source systems into Google Cloud using Kafka, Dataflow, Dataproc Streaming, etc.
  • Ensure that the platform goes through Continuous Integration (CI) and Continuous Deployment (CD) with DevOps automation
  • Expand and grow data platform capabilities to tackle new data problems and challenges
  • Support Big Data and batch/real-time analytical solutions using groundbreaking technologies such as Apache Beam
  • Research and assess open-source technologies and components to recommend and integrate into the design and implementation
Qualifications
  • Bachelor of Science degree in Computer Science or equivalent
  • 7+ years of experience with the Hadoop ecosystem, cloud platforms, and other distributed systems and technologies
  • Strong hands-on experience developing applications in more than one language stack: Java, Python, Scala
  • Expert-level software development experience, practicing strong software development principles and best practices: test-driven development, CI/CD, coding standards
  • Ability to adapt conventional big data and cloud frameworks and tools to the use cases required by the project
  • Experience building stream-processing systems using solutions such as Dataflow, Spark Streaming, or Flink
  • Experience with Google Cloud technologies is a plus
  • Experience with other open-source technologies like Elasticsearch, Logstash, and JanusGraph is a plus
  • Knowledge of design strategies for developing a scalable, resilient, always-on data lake
  • Some knowledge of Agile (Scrum) development methodology is a plus
  • Strong development/automation skills
  • Excellent interpersonal and teamwork skills
  • Can-do attitude toward problem-solving and quality, and the ability to execute
