
Job Summary

Our data team plays a pivotal role in driving analytics into our go-to-market and R&D investments, while also embedding powerful reporting into our customer-facing products. This enables gym owners to make decisions that optimize their business performance, from predicting membership churn to identifying upsell opportunities. Our data scientists and data engineers collaborate closely with our executive team and customers to integrate analytics into the core of our strategic decision-making process.

  • Minimum Qualification: Degree
  • Experience Level: Mid level
  • Experience Length: 3 years

Job Description/Requirements

What you will be doing:
  • Design, build, and maintain scalable data pipelines and infrastructure that enable real-time data processing, analytics, and machine learning.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data needs and translate them into efficient data solutions.
  • Develop and optimize ETL processes to ingest, transform, and store data from various sources, ensuring data quality, reliability, and performance.
  • Lay the foundation for our data infrastructure by building core models in our data warehouse, version controlling them using tools like DBT, and ensuring all relevant data is integrated into the warehouse.
  • Clean, process, and link data from multiple sources to create a single, unified customer view that enables advanced analytics and personalized experiences (a minimal illustration follows this list).
  • Continuously monitor and troubleshoot data systems, ensuring high availability and reliability.
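
For illustration only, here is a minimal pandas sketch of the kind of source-linking ETL described above; the file names, columns, and aggregations are hypothetical, not a prescribed stack:

    import pandas as pd

    # Hypothetical source extracts: membership records and point-of-sale purchases.
    members = pd.read_csv("memberships.csv")      # member_id, email, plan, signup_date
    purchases = pd.read_csv("pos_purchases.csv")  # member_id, amount, purchased_at

    # Clean: normalise keys and drop purchase rows that cannot be linked to a member.
    members["email"] = members["email"].str.strip().str.lower()
    purchases = purchases.dropna(subset=["member_id"])

    # Aggregate purchase behaviour per member.
    spend = purchases.groupby("member_id", as_index=False).agg(
        total_spend=("amount", "sum"),
        last_purchase=("purchased_at", "max"),
    )

    # Link the sources into a single, unified customer view for analytics and churn modelling.
    customer_view = members.merge(spend, on="member_id", how="left")
    customer_view.to_parquet("customer_view.parquet", index=False)

In practice, joins and aggregations like these would typically live as version-controlled warehouse models (for example in DBT) rather than in one-off scripts, as the responsibilities above describe.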

What you’ll bring to the table:
  • A minimum of 3 years of experience in data engineering, with a strong understanding of data warehousing, data modeling, and ETL processes.
  • Proficiency in SQL and experience with big data processing frameworks such as Apache Spark, Hadoop, or Flink.
  • Experience with cloud platforms such as AWS, GCP, or Azure, and their data services (e.g., Redshift, BigQuery, Databricks).
  • Strong programming skills in languages such as Python, Scala, or Java, with experience in data processing libraries (e.g., Pandas, Dask, Koalas).
  • Familiarity with data orchestration tools like Airflow, Luigi, or Dagster, and experience with version control systems like Git (a minimal orchestration sketch follows this list).
  • Excellent problem-solving skills and the ability to design and implement efficient, scalable, and maintainable data solutions.
  • Optional: Experience with streaming data technologies (e.g., Kafka, Kinesis) and knowledge of data visualization tools (e.g., Tableau, Looker).
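
For illustration only, a minimal sketch of how a daily pipeline might be orchestrated with Airflow; the DAG name, schedule, and task bodies are placeholders, not a prescribed setup:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        """Placeholder: pull raw data from a source system."""

    def transform():
        """Placeholder: clean and model the extracted data."""

    def load():
        """Placeholder: write the modelled data to the warehouse."""

    with DAG(
        dag_id="daily_customer_view",        # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",          # Airflow 2.x argument; newer releases use `schedule`
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run extract, then transform, then load, once per day.
        extract_task >> transform_task >> load_task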

