Big Data Hadoop Developer (Data Engineer)

Federal Reserve Bank, San Francisco in San Francisco, CA

  • Industry: Information Technology - Software Engineer, Developer, and Programmer
  • Type: Full Time
  • $126,470.00 - $187,710.00
Position filled
Title: Big Data Hadoop Developer (Data Engineer)
Location(s): San Francisco, CA; Salt Lake City, UT

The Federal Reserve Bank of San Francisco is looking for a Big Data Hadoop Developer for a one-year temporary assignment at our San Francisco location. As a term employee of the Fed, you are salaried and benefited, and you work directly for the Bank for a defined period of time. The position has the potential to become full-time/regular after one year or sooner.

The Advanced Data and Analytics Capabilities team leads and develops solutions for business lines across the System as well as for National IT. We employ state-of-the-art technologies from the Hadoop ecosystem, including tools for data integration, data modeling, and data analytics. You will have the opportunity to apply your critical thinking and technical skills across many disciplines. In this role, you will contribute to high-quality technology solutions that address business needs by developing utilities for the platform and applications for customer business lines, and by providing production support. You should have strong communication skills, as you will work closely with other groups on the development and testing of your assigned application components to ensure the successful delivery of each project.

Essential Duties and Responsibilities:

  • Develop code for common utilities in Big Data environments using Scala, Python, Java, scripting, etc. (a brief sketch follows this list)
  • Provide end-to-end support for solution integration, including designing, developing, testing, deploying, and supporting solutions in the Hadoop environment
  • Build schedules and scripts, and develop new mappings and workflows
  • Build workflows covering source code development through go-live
  • Build run books and troubleshooting guides for different types of workflows and Control-M jobs in the Big Data environment
  • Test submitted software changes prior to production rollouts
  • Develop, execute, and document unit test plans, and support application testing
  • Assist in the deployment of new modules, upgrades, and fixes to the production environment
  • Validate deployments to staging and production environments
  • Provide operational and production support for applications and utilities
  • Tackle issues and participate in defect and incident root cause analyses
  • Collaborate with Developers, DevOps, Release Management, and Operations
  • Maintain security in accordance with Bank security policies
  • Participate in an Agile development environment by attending daily standups and sprint planning activities
  • Create change management packages and implementation plans for migration to different environments
  • Automate execution of batch applications using Control-M
  • Assist in technical writing on Big Data components
  • Assist in testing upgrades of Big Data environments
  • Be open to cross-training and assignments with other division groups
  • Contribute to initiatives such as mining new data sources, developing data tools, evaluating data visualization software, or developing documentation
  • Explore data to derive business insight
  • Independently determine methods and procedures on new assignments, and potentially provide work direction to others
  • Analyze complex issues, situations, and data, drawing on in-depth evaluation of variable factors to reach resolution
  • Use judgment and analytical skill in selecting the methods, techniques, and evaluation criteria for obtaining results
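By way of illustration for the utilities duty above, the following is a minimal sketch of a Spark batch utility in Scala that lands raw HDFS files as partitioned Parquet. It is not code from the Bank; the object name, HDFS paths, and column names are hypothetical placeholders.

    import org.apache.spark.sql.{SaveMode, SparkSession}

    // Hypothetical sketch: paths and column names are illustrative,
    // not taken from the posting.
    object RawToParquet {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("raw-to-parquet")
          .getOrCreate()

        // Read delimited raw files landed on HDFS by an upstream feed.
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/landing/transactions/")

        // Drop records missing key fields before publishing.
        val cleaned = raw.na.drop(Seq("account_id", "posted_date"))

        // Publish as partitioned Parquet for downstream query engines.
        cleaned.write
          .mode(SaveMode.Overwrite)
          .partitionBy("posted_date")
          .parquet("hdfs:///data/curated/transactions/")

        spark.stop()
      }
    }

A utility like this would typically be packaged with sbt, submitted via spark-submit, and scheduled as a Control-M or cron job, which is the shape of work the duties above describe.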
Qualifications:

  • Undergraduate degree in computer science, MIS, engineering, statistics, data science, or a related field
  • At the Senior level, five or more years of relevant technical or business work experience; at the Lead level, seven or more years of relevant technical or business work experience
  • Three years of programming experience in Java, Python, or Scala preferred
  • Knowledge of HDFS data distribution and processing
  • Understanding of Hive, Impala, and Spark (a brief sketch appears at the end of this posting)
  • Knowledge of the Hadoop ecosystem, machine learning algorithms, and text analytics
  • Strong programming and scripting skills on UNIX/Linux (e.g., Python or Bash)
  • Experience with Control-M, cron, and scheduling of batch jobs
  • Experience with workflow processing on the Hadoop ecosystem, including Oozie, NiFi, etc.
  • Passion for technology and data; a critical thinker, problem solver, and self-starter
  • Strong quantitative and analytical skills
  • Strong attention to detail
  • Ability to communicate effectively (both verbally and in writing) and work in a team environment
  • Ability to balance multiple assignments and shift gears when new priorities arise
  • Experience performing 24x7 production support for applications
  • Familiarity with Agile methodologies
  • Ability to learn and document an existing system

Those authorized to work in the United States without sponsorship are encouraged to apply.

Nice to have:

  • Working experience at government or quasi-government organizations
  • Cloud experience, including using big data technologies in the cloud

The Federal Reserve Bank of San Francisco believes in the diversity of our people, ideas, and experiences and is committed to building an inclusive culture that is representative of the communities we serve. The Federal Reserve Bank of San Francisco is an Equal Opportunity Employer.

- provided by Dice
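As context for the Hive/Impala/Spark qualification listed above, here is a minimal sketch of a Spark SQL aggregation over a Hive-managed table, again in Scala. The database, table, column names, and date are hypothetical, not taken from the posting.

    import org.apache.spark.sql.SparkSession

    // Hypothetical sketch: database, table, and column names are
    // illustrative only.
    object DailyVolumeReport {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-volume-report")
          .enableHiveSupport()   // resolve tables via the Hive metastore
          .getOrCreate()

        // Aggregate one day's records from a Hive table.
        val report = spark.sql(
          """SELECT posted_date, COUNT(*) AS txn_count
            |FROM curated.transactions
            |WHERE posted_date = '2019-06-30'
            |GROUP BY posted_date""".stripMargin)

        report.show()
        spark.stop()
      }
    }

Because Hive and Impala typically share a metastore, a table queried this way from Spark would generally be visible to Impala as well, which is why the qualification groups the three engines together.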