Senior Data Engineer
Job description
A fantastic opportunity to join a rapidly growing global tech consulting company as part of their data and AI team.
This role works closely with key clients on data platform design and architecture. Candidates with hands-on AWS or Azure experience will do well in this role.
Responsibilities
Data Pipeline Management: Design, implement, and maintain scalable data pipelines to process structured and unstructured data from multiple sources.
Data Infrastructure: Build and manage data warehouses and lakes, focusing on efficient storage, retrieval and support for business analytics.
Cloud Platform Utilisation: Use cloud services (AWS, Azure, GCP) to deploy and optimise data infrastructure for performance, cost-effectiveness, and reliability.
Data Governance & Quality: Implement checks, validations and governance frameworks to ensure data accuracy, integrity and compliance.
Optimisation: Continuously improve data pipeline performance, reduce latency and optimise processing speed.
Collaboration: Work with data scientists, analysts and stakeholders to understand their data needs and deliver technical solutions aligned with business goals.
Data Presentation: Support business users with data visualisation tools like Tableau and Power BI.
Innovation: Stay updated on emerging technologies in the big data space and implement innovative solutions to enhance data processing.
Data Infrastructure Support: Help define and optimise the underlying data infrastructure.
Requirements
Educational Background: Bachelor's, Master's, or PhD in IT, Information Management, or Computer Science with at least 6 years of experience.
Big Data Knowledge: Strong understanding of big data technologies like Hadoop, Spark, and NoSQL databases (e.g., Cassandra, MongoDB, Elasticsearch).
Data Ingestion & Processing: Experience with ETL processes and batch jobs to handle data from multiple sources.
Querying Tools: Proficiency in tools like Hive, Spark SQL and Impala for querying data.
Real-Time Processing: Familiarity with or willingness to learn about real-time stream processing using technologies like Kafka, AWS Kinesis, or Spark Streaming.
DevOps/DataOps: Interest or experience in DevOps/DataOps principles, including Infrastructure as Code and automating data pipeline workflows.
Data Science Understanding: High-level knowledge of data science processes (e.g., model building, training, and deployment).
Passion for Technology: A strong passion for technology and continuous learning.
Salary: SGD 120,000 to 140,000 per year (negotiable)
If you are interested in this job and would like to have a discussion, please contact joel@tenten-partners.com.
Equal Opportunity Statement
TENTEN Partners is an equal opportunity firm and is committed to providing equal employment opportunities to all qualified individuals without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, age, disability, or any other characteristic protected by applicable law.