Job description / Role
This role is with a Fintech company that is a leading provider of AI-based Big Data analytics.
About the company:
We are dedicated to helping financial organizations combat financial crimes such as money laundering, which facilitates malicious activities including terrorist financing, narco-trafficking, and human trafficking that negatively impact the global economy.
We are looking for a Data Engineer to join our growing team of data experts.
The new hire will be responsible for designing, implementing, and optimizing data pipeline flows within the system.
The ideal candidate has experience in building data pipelines and data transformations and enjoys optimizing data flows and building them from the ground up.
The Data Engineer will support our data scientists by implementing the relevant data flows based on the data scientists' feature designs, and by constructing complex rules to detect money laundering activity.
They must be self-directed and comfortable supporting multiple production implementations for various use cases.
Responsibilities
- Implement and maintain production data pipeline flows within the system, based on the data scientists' designs
- Design and implement solution-based data flows for specific use cases, making these implementations applicable within the product
- Build Machine Learning data pipelines
- Create data tools that help the analytics and data science teams build and optimize our product into an innovative industry leader
- Work with product, R&D, data, and analytics experts to strive for greater functionality in our systems
- Train customer data scientists and engineers to maintain and amend data pipelines within the product
- Travel to customer locations both domestically and abroad
- Build and manage technical relationships with customers and partners
Requirements
- 3+ years of hands-on experience with SQL.
- Experience with Spark programming languages: PySpark, Scala, Java, or R
- Hands-on experience with data transformation, validations, cleansing, and ML feature engineering
- Hands-on experience working with Apache Spark cluster - an advantage.
- BSc degree or higher in Computer Science, Statistics, Informatics, Information Systems, Engineering, or another quantitative field.
- Experience working with and optimizing big data pipelines, architectures, and data sets - an advantage.
- Strong analytic skills related to working with structured and semi-structured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Business-oriented and able to work with external customers and cross-functional teams.
- Fluent in English, both written and spoken
Nice to have
- Experience with Linux
- Experience building Machine Learning pipelines
- Experience with Elasticsearch
- Experience with Zeppelin/Jupyter
- Experience with workflow automation platforms such as Jenkins or Apache Airflow
- Experience with Microservices architecture components, including Docker and Kubernetes.
About the Company
Regional Labs is a new player in the Middle Eastern tech ecosystem. We are a regional network of Innovation Hubs, designed to link the tech ecosystems in the region and thus provide startups with an additional layer of market access and connectivity. Our HQ is in Manama, the capital of the Kingdom of Bahrain.
We work in coordination with governmental entities, key regional players, and local stakeholders in each respective ecosystem, aiming to fortify the breakthroughs made possible by the Abraham Accords through interpersonal relationships, and technological transference.