Role: Sr Big Data Engineer
Duration: Long Term Contract
Location: Remote [can work from your current location]
Key Skill: Azure Databricks
Requirements:
• 9+ years of hands-on software development experience
• 4+ years of experience with Hadoop, MapReduce, HDFS, Spark, and streaming
• 2+ years of experience with at least one major cloud platform (Azure preferred)
• 7+ years of development experience with Java or Scala, Python, XML, and web services
• 2+ years of experience with Kafka, streaming, NoSQL databases (Cassandra/Cosmos DB preferred), Docker, and Kubernetes
• Thorough understanding of service-oriented architecture (SOA) concepts.
• 5+ years of relational database experience
• Previous experience with Agile/Scrum methodology and best practices
• Successful track record of learning new tools and technologies and leveraging them on integration and implementation projects
• Bachelor’s degree in a technical field, or equivalent experience.
Responsibilities:
• Design, code, test, document, and maintain high-quality, scalable Big Data solutions
• Research, evaluate, and deploy new tools, frameworks, and patterns to build a sustainable Big Data platform
• Identify gaps and opportunities for improvement of existing solutions
• Define and develop APIs for integration with various data sources in the enterprise
• Analyse and define customer requirements.
• Assist in defining product technical architecture.
• Make accurate development effort estimates to assist management in project and resource planning.
• Create prototypes and proofs-of-concept; participate in design and code reviews
• Collaborate with management, quality assurance, architecture, and other development teams
• Write technical documentation and participate in production support.
• Keep skills up to date through ongoing self-directed training. The ideal candidate is a self-starter who learns quickly and is enthusiastic, active, and eager to learn.
Please share the profiles with [email protected]