Overview

Job description

· Development of Hadoop architecture

· Application design on Apache Hadoop

· Analyzing business and technical requirements

· Selecting and integrating big data applications and frameworks required to provide requested capabilities

· Implementing ETL processes and ETL development

· Streaming data processing

· Administration of Apache Hadoop cluster with all included services

· Monitoring performance and proposing any necessary infrastructure changes

· Planning, executing, and supporting implementations

· Participating in project documentation

Requirements

· English at a communicative level

· Experience in the area of Big Data or Business Intelligence

· Knowledge of Hadoop ecosystem

· Data and process modeling

· Knowledge of SQL and NoSQL databases, such as HBase, Cassandra, and MongoDB

· Good knowledge of Big Data querying tools, such as Hive and Impala

· Knowledge of various ETL techniques and frameworks, such as Flume

· Experience with various messaging systems, such as Kafka

· Team player

· Experience in finance or banking sector

· Experience in Java or Scala programming

· Unix shell scripting

· Knowledge of data quality and security principles in Big Data ecosystem

We offer

· Benefits such as motivating yearly bonuses, an extra week of holidays, sick days per year, flexible working hours, a loyalty bonus, a pension scheme allowance, and a meal allowance.

· For comfortable work: a notebook, a cell phone, technical trainings, corporate events and team building, and a relax & active room.
