Location: Atlanta, GA
Job Description
Responsibilities
- Minimum of 7 years of experience in IT
- Proficient in Java, including fixing and redesigning native libraries
- Create Scala jobs for data transformation and aggregation with a focus on the functional programming paradigm
- Knowledge of Python
- Familiarity with Unix scripting
- Able to discuss implementation issues faced and performance tuning work from previous projects, along with the best practices followed for performant implementations
- Help design, build, and enhance the platform, ensuring that developed components are testable, repeatable, highly performant, scalable, and automated
- Produce unit tests for Spark transformations and helper methods
- Write Scaladoc-style documentation for all code, and design data processing pipelines
- Spark query tuning and performance optimization
- Experience with batch interfaces and real-time interfaces such as Kafka messaging
- Good presentation and communication skills to clearly report on day-to-day activities
- Prior big data development experience is preferred (streaming on Apache Flink or Apache Spark); otherwise, candidates should be able to explain the conceptual differences between batch and streaming data processing
- Prior data warehouse/ETL/SQL development experience is preferred, with the ability to understand the end-to-end data flow and the application architecture
- Azure Integration and deployment experience
- Quick learner who can propose alternate options to solve an application problem
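
The Scala and Spark-testing bullets above ask for transformations written as pure, unit-testable functions in the functional style. A minimal sketch of what that looks like, using plain Scala collections in place of Spark datasets (the Sale record and its region/amount fields are illustrative assumptions, not part of the role description):

```scala
object SalesTotals {
  // Illustrative record type (an assumption for this sketch).
  final case class Sale(region: String, amount: Double)

  // Pure helper: no side effects and no Spark session required,
  // so it can be covered by a plain unit test.
  def totalsByRegion(sales: Seq[Sale]): Map[String, Double] =
    sales
      .filter(_.amount > 0) // drop refunds / invalid rows
      .groupBy(_.region)
      .map { case (region, rows) => region -> rows.map(_.amount).sum }

  def main(args: Array[String]): Unit = {
    val sample = Seq(Sale("east", 10.0), Sale("east", 5.0), Sale("west", -2.0))
    println(totalsByRegion(sample))
  }
}
```

Because the aggregation logic lives in a pure helper, the same function body can later be lifted into a Spark `Dataset` transformation while the unit tests keep running against ordinary collections.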
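
The batch-vs-streaming bullet is exactly the kind of conceptual difference a candidate might be asked to demonstrate: batch computes once over a complete, bounded input, while streaming folds each arriving element into incrementally updated state. A small plain-Scala illustration, with no Flink or Spark dependency (the event-count example is an illustrative assumption):

```scala
object BatchVsStreaming {
  // Batch: the whole bounded dataset is available up front; compute once over all of it.
  def batchCounts(events: Seq[String]): Map[String, Int] =
    events.groupBy(identity).map { case (k, v) => k -> v.size }

  // Streaming: events arrive one at a time; state is updated incrementally,
  // and an up-to-date result exists after every element.
  def streamingCounts(events: Iterator[String]): Iterator[Map[String, Int]] =
    events.scanLeft(Map.empty[String, Int]) { (state, e) =>
      state.updated(e, state.getOrElse(e, 0) + 1)
    }.drop(1) // drop the empty initial state

  def main(args: Array[String]): Unit = {
    val events = Seq("click", "view", "click")
    println(batchCounts(events))                      // one final result
    streamingCounts(events.iterator).foreach(println) // a result per event
  }
}
```

The final streaming state equals the batch result; the difference is that streaming exposes an answer after every event instead of only at the end, which is the property real engines like Flink and Spark Structured Streaming build on.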
From:
Ashok,
Biztegy
ashok@ba-itconsult.com
Reply to: ashok@ba-itconsult.com