data engineer (google cloud) | budapest

randstad hungary
job details

reference number
27857 / 28798
Click on the "Apply" button and choose from the application options; it takes no more than 2 minutes.
For some of our positions we conduct video interviews. If you wish to learn more about how to prepare effectively, see our tips & tricks for a successful online introduction.

Organisation/Department

Our multinational client is growing in the IT field and, with new IT projects under way, is looking for a Data Engineer to join their team. You will work on data engineering and business-insight pipelines deployed in a hybrid cloud environment (an on-premises Hadoop ecosystem plus Google Cloud). You will develop pipelines and scripts that run on big data clusters (e.g. on Spark or GCP BigQuery). Ensuring good code quality and documentation for the delivered artifacts is also essential.

Job description

If you join as a Data Engineer, your main responsibilities will be:

  • Develop data engineering artifacts using Linux shell scripting, Python, Java and SQL scripting
  • Develop ETL processes and jobs in the Google Cloud and Hadoop ecosystems (e.g. Spark jobs)
  • Perform testing and code reviews
  • Ensure high code quality
  • Understand architectural issues, and factor them into decisions and recommendations
  • Own software design and collaborate within the development team
  • Write good documentation
  • Report and track progress in JIRA and Confluence

Requirements


Degrees: BSc in an IT-related field

IT skills (with required levels and years of experience):

  • Experience in building BI pipelines and artifacts in cloud ecosystems, preferably on Google Cloud Platform (e.g. BigQuery, Data Fusion, Cloud Storage), although alternatives can be considered as well
  • Strong experience with the Hadoop ecosystem (HDFS, HBase, Hive, Spark)
  • Experience in Java development (junior/intermediate, 2-3 years)
  • Experience with SQL databases and SQL scripting (intermediate, 3+ years)
  • Good experience with ETL processes and tools (2+ years)
  • Experience with data modeling tools

Experience in the execution of certain tasks and concepts:

  • Software development, testing and documentation (3+ years)
  • Creating and testing ETL processes (2+ years)
  • Data management concepts (lifecycle management, data lineage, warehousing, ETL, etc.)

Language skill expected:

  • Fluency in English (C1)


Advantages:

  • Experience with the Scala language
  • Certification in Google Cloud (any)
  • Working with automated CI/CD solutions (e.g. version control, JFrog, Automic)

Offer

  • Market-competitive compensation package
  • A great diversity of people, cultures and business areas
  • Language courses and training
  • Global and diverse working environment
  • On-site relax & gym rooms
  • Volunteering, sport and employee events
  • Remote work and home office opportunities

Contact

Rác Ildikó