The company develops some of the most innovative, world-leading products, operating across multiple industry sectors. They are driven by a passion to create some of the world's most cutting-edge technology and tools for the digital age. Their systems combine industry-leading performance with scalability to ensure productive workflows across exciting and interesting industry sectors. There's a reason they have been nominated for and won multiple innovation awards!
The role involves translating complex functional and technical requirements into detailed designs, and implementing complex big data projects with a focus on collecting, parsing, managing, analyzing, and visualizing large data sets to turn information into actionable deliverables across customer-facing platforms.
- Masters, PhD, or equivalent experience in a quantitative field (Computer Science, Mathematics, Engineering, Artificial Intelligence, etc.)
- Programming proficiency in Python (NumPy, pandas, etc.), R, Octave, or C++/Java
- Data modelling and cleansing experience in the above languages
- Extensive experience working with big data sets
- Experience with pre-processing tools like Hive, Impala, Spark, and Pig
- Exposure to SQL and relational databases, as well as NoSQL stores (MongoDB, DynamoDB, etc.)
- Prior experience with one or more of the following: collaborative filtering, clustering, classification, segmentation, regression, decision trees, statistical models
- Excellent communication, teamwork, and a results-oriented attitude
- Strong problem-solving and debugging skills
- Experience in the healthcare domain
- Experience working with the Hadoop Ecosystem
- Experience working with different data sources (REST APIs, different database technologies, etc.)