Hadoop Engineer

Full-time · Posted 3 weeks ago
Requirements
  • Strong experience in programming languages such as Python, Java, or Scala for data manipulation and engineering tasks.

  • Expertise in both SQL and NoSQL databases for structured and unstructured data storage and retrieval.

  • Hands-on experience with big data technologies like Hadoop, Spark, Kafka, and Hive to handle large-scale data processing and real-time data streams.

  • In-depth knowledge of data warehousing solutions such as Amazon Redshift, Google BigQuery, and Snowflake for building and managing data warehouses.

  • Proficiency in designing, developing, and maintaining ETL (Extract, Transform, Load) processes using tools like Apache NiFi, Talend, or Informatica.

  • Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying, managing, and scaling data infrastructure and services.

  • Strong understanding of data modeling concepts and techniques to create efficient and scalable data models.

  • Experience with version control systems such as Git for code management and collaboration.

  • Knowledge of data governance, data quality standards, and data security practices to ensure compliance and protection of sensitive information.