Azure - Senior Data Engineer
Rackspace Technology
Engineering & Technology
Job Summary
We are seeking a highly skilled Senior Data Engineer or Data Architect with extensive experience in Synapse and Azure data services. The ideal candidate will have a deep understanding of Spark and Lakehouse architectures, as well as a proven track record of implementing and managing data solutions using Synapse services and other Azure tools. You will design, develop, and maintain real-time data streaming pipelines using Spark Streaming or Azure Functions.
- Minimum Qualification: Degree
- Experience Level: Senior
- Experience Length: 5 years
Job Description/Requirements
- Proven experience in real-time data streaming using Spark Streaming or Azure Functions.
- Strong proficiency in Python for data processing and automation tasks.
- Hands-on experience with Kafka for data ingestion and message queuing.
- Familiarity with Redis for caching and fast data retrieval (a brief cache-aside sketch follows this list).
- Knowledge of data lake architectures and best practices for data storage and retrieval.
- Solid understanding of software engineering principles, including design patterns, testing, and documentation.
- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
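To make the Redis requirement above concrete, here is a minimal, hypothetical cache-aside sketch in Python. The host, key naming, TTL, and the `load_profile_from_store` helper are illustrative assumptions, not part of this posting.

```python
import json

import redis

# Hypothetical connection details; decode_responses returns str instead of bytes.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_profile_from_store(machine_id: str) -> dict:
    # Stand-in for a slower database or data-lake lookup.
    return {"machine_id": machine_id, "status": "unknown"}

def get_machine_profile(machine_id: str) -> dict:
    """Serve a machine profile from Redis, falling back to the slow store."""
    key = f"machine:profile:{machine_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                  # cache hit: fast retrieval
    profile = load_profile_from_store(machine_id)  # cache miss: slow path
    r.set(key, json.dumps(profile), ex=300)        # keep for 5 minutes
    return profile
```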
Key Responsibilities
- Design, develop, and maintain data architectures using Spark and Lakehouse methodologies.
- Utilize Synapse services including Spark pools, dedicated SQL pools, and pipelines to build robust data solutions.
- Leverage other Azure data services to enhance and optimize data workflows.
- Implement and manage CI/CD pipelines to ensure smooth deployment and integration processes.
- Load, merge, and process machine logs from Kafka, ensuring efficient data flow and transformation.
- Integrate processed data into the Redis cache and send it to the data lake for long-term storage and analysis (illustrated in the sketch after this list).
- Implement and optimize data processing solutions using Python.
- Apply software engineering best practices, including code reviews, version control, and continuous integration/deployment (CI/CD).
- Collaborate with cross-functional teams to understand requirements and deliver high-quality data solutions.
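As a rough, hypothetical sketch of the flow described above (not Rackspace's actual implementation): machine logs are read from Kafka with Spark Structured Streaming, each micro-batch is appended to the data lake, and the latest event per machine is cached in Redis. The broker address, topic name, storage path, schema, and checkpoint location are all illustrative assumptions.

```python
import json

import redis
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# Assumes the spark-sql-kafka connector is available on the classpath.
spark = SparkSession.builder.appName("machine-log-stream").getOrCreate()

log_schema = StructType([
    StructField("machine_id", StringType()),
    StructField("event", StringType()),
    StructField("ts", TimestampType()),
])

# Read raw machine-log events from a hypothetical Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "machine-logs")               # placeholder topic
       .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
logs = (raw.selectExpr("CAST(value AS STRING) AS value")
        .select(F.from_json("value", log_schema).alias("log"))
        .select("log.*"))

def sink(batch_df, batch_id):
    # Append each micro-batch to the data lake for long-term storage...
    batch_df.write.mode("append").parquet(
        "abfss://lake@account.dfs.core.windows.net/machine-logs/")  # placeholder path
    # ...and cache the latest event per machine in Redis for fast retrieval.
    client = redis.Redis(host="cache-host", port=6379)              # placeholder host
    for row in batch_df.collect():  # acceptable for small batches only
        client.set(f"machine:{row.machine_id}",
                   json.dumps({"event": row.event}), ex=3600)

query = (logs.writeStream
         .foreachBatch(sink)
         .option("checkpointLocation", "/tmp/checkpoints/machine-logs")
         .start())
query.awaitTermination()
```

foreachBatch lets a single streaming query fan out to both sinks; at production scale the per-row Redis writes would move into foreachPartition with a pooled client rather than collecting rows to the driver.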
Qualifications
- Strong expertise in Spark and Lakehouse architectures.
- Hands-on experience with Synapse services: Spark pools, dedicated SQL pools, and pipelines.
- Proficiency with additional Azure data services.
- Solid experience with CI/CD processes and tools.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
Join our team and contribute to cutting-edge data solutions in a dynamic and innovative setting.