Director of Data Engineering
Missouri All
Applications/Web Development
Database Developer/Engineer
JN -012018-20707

Key Activities for the Architect include:

  • Defines and implements data platform architectures and technical road maps that align with our client’s data strategy (the current focus is data generally, not big data specifically)
  • Evolves the Enterprise Data Lake service and supports key projects implementing data technologies
  • Manages the enterprise Hadoop (or similar open source) environments to ensure that service delivery is cost effective and that business Service Level Agreements (SLAs) for uptime, performance, and capacity are met
  • Researches, selects, and defines cloud technologies for structured and unstructured data, such as Amazon S3, Redshift, Azure, and Google Cloud Storage/Cloud Bigtable
  • Defines guidelines, standards, strategies, security policies and change management policies to support the Data platforms
  • Researches and evaluates technical solutions, including various Hadoop distributions, NoSQL databases, and data integration and analytical tools, with a focus on enterprise deployment capabilities such as security, scalability, reliability, and maintainability
  • Advises and supports project teams (project managers, architects, business analysts, and developers) on tools, technology, and methodology related to the design and development of data solutions
  • Assesses and leverages legacy data connections and improvements for incorporation into the modern Data Lake service
  • Maintains knowledge of market trends and developments in Hadoop-related tools, analytics software, and related and emerging technologies, such as cloud hosting services and Agile/DevOps development processes, in order to recommend and deliver best-practice solutions
  • Manages and develops new data processes to enhance existing systems and support new requirements
  • Implements logical and physical modeling best practices on various data platforms so they integrate cleanly into existing enterprise data models; executes data model components to achieve efficient storage utilization and optimal query performance
  • Gathers requirements from internal customers and consults with them on best practices for using Big Data platforms effectively as a data and computing resource; provides management with recommendations and follow-on solutions to support ongoing maintenance and growth toward strategic and operational goals
Preferred Qualifications:
  • 3+ years’ experience working on Big Data technologies
  • 2+ years’ experience managing an enterprise-scale Hadoop (or similar open source technologies) environment
  • Experience evaluating, selecting, and implementing cloud data storage technologies
  • Experience implementing Big Data solutions based on Hadoop tools such as Hive, HDFS, and MapReduce in a live production environment
  • Advanced working knowledge of Spark, Hadoop ecosystems, Kafka, Zookeeper, MySQL, Scala, and MongoDB
  • Strong understanding of MapReduce or similar paradigms
  • Strong background in data modeling, object oriented programming and/or functional programming
  • Experience working with stream processing
  • Experience working with large (1TB+) datasets
  • Solid understanding of at least one RDBMS (Postgres, MySQL, MSSQL, Oracle, etc.)
  • Prior experience with different Hadoop distributions such as Cloudera, Hortonworks, or MapR
  • Significant programming experience in multiple languages such as Python, Java, Ruby, or C#
  • Thorough knowledge of database technologies ranging from RDBMS databases to NoSQL databases
  • Previous experience developing and supporting large enterprise deployments of distributed architectures, including J2EE deployments
  • Experience working with IT development methodologies such as Agile and DevOps
