
Hadoop Administrator/Developer

Location: Seattle, Washington
Salary: US$100,000 - US$150,000 per annum
Sector: Retail and Distribution
Type: Permanent
Reference #: PR/054118_1536584444

RED has a direct client in the greater Seattle, Washington area seeking a Hadoop Administrator/Developer. This is a full-time permanent position with a company that has a growing international presence.

Tasks and responsibilities:

  • Partner with Infrastructure to identify the server hardware, software, and configurations needed to run big data workloads (e.g. Spark, Hive, Impala) optimally.
  • Plan and execute major platform software and operating system upgrades and maintenance across physical and virtualized environments.
  • Work with project teams to integrate Hadoop access points.
  • Maintain a strong focus on design, build, deployment, security, and system hardening of services.
  • Design and implement a toolset that simplifies provisioning and support of a large cluster environment.
  • Maintain current knowledge of industry trends and standards in the Big Data space/ecosystem.
  • Create and maintain detailed up-to-date technical documentation.
  • Proactively manage Hadoop system resources to ensure maximum system performance and adequate additional capacity for peak periods and growth.
  • Review performance stats and query execution/explain plans, and recommend changes for tuning Hive/Impala queries.
  • Recommend security management best practices including the ongoing promotion of awareness on current threats, auditing of server logs and other security management processes, as well as following established security standards.
  • Provide support to the user community using incident and problem management tools, email, and voicemail.
  • Periodic off-hours work is required, including weekends and holidays. Must be able to provide 24x7 on-call support as necessary.

Required skills, abilities, and certifications

  • 3+ years working with Hadoop and/or the Big Data ecosystem in production
  • Deep knowledge of Hadoop components (e.g. HDFS, YARN, ZooKeeper, Sqoop, Hive, Impala, Hue, Sentry, Spark, Kafka, Flume)
  • Minimum of 3 years of experience supporting products running on AIX, Linux, or other UNIX variants.
  • Strong understanding of JVMs.
  • Knowledge of Relational Databases (DB2, Oracle, SQL Server, DB2 for iSeries, MySQL, Postgres, MariaDB)
  • Software configuration management tool experience (Puppet preferred)
  • Version control experience (Git preferred)
  • Proficient with operating system utilities as well as server monitoring and diagnostic tools to troubleshoot server software and network issues.
  • Deep knowledge of associated industry protocol standards such as: LDAP, DNS, TCP/IP, etc.
  • Strong understanding of Enterprise level services - Active Directory, PKI Infrastructure (Venafi).
  • Experience with Kerberos, cross-realm authentication and Kerberized services.
  • Security experience including SSL certificates, TLS, system hardening, penetration tests, etc.
  • Understanding of message-based architecture
  • The ideal candidate must have demonstrated flexibility.

Recommended skills, abilities, and certifications

  • Experience with Systems Management and Administration (applying fixes, loading the OS, creating Gold images, resolving system issues, working with vendors to resolve issues, etc.)
  • Experience with VMware (vSphere), virtualized environments, and/or cloud providers such as Microsoft Azure, AWS, etc.
  • Experience with Tivoli Storage Management and/or Commvault
  • Experience in System/Database Backup and Recovery
  • B.S. degree in Computer Science or equivalent formal training and experience.
