Job Title: Business Intelligence Engineer-NN
Job Type: Permanent
Employment Type: Full-time
Salary: Negotiable
Location: Tokyo, Japan
Posted: 2024-11-27
Job ID: 55753

Job Description

Business Intelligence Engineer for Top Fintech Firm

■ Job Title

Business Intelligence Engineer

■ Company Overview

Top Fintech firm

■ Your Role and Responsibilities

(Upon Employment) 

Data Modeling & Feature Engineering: Identify the business’s data needs and key metrics for analytics and reporting. Design and implement efficient, flexible data models to improve the calculation of these metrics.

ETL Pipeline Development: Build and maintain ETL pipelines to update tables that track these metrics for business intelligence and machine learning. Ensure accurate, efficient data processing and storage while maintaining data integrity and accessibility across platforms.
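
For illustration only, a minimal sketch of the kind of daily batch pipeline this duty involves, assuming Airflow (one of the orchestration tools named in the requirements below); the DAG, task, and table names are hypothetical:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_source_events(**context):
    # Pull the raw events for the logical date from the source systems / data lake.
    print(f"Extracting raw events for {context['ds']}")


def rebuild_metrics_table(**context):
    # Recompute the daily metrics partition (e.g. approval rate, chargeback rate).
    print(f"Rebuilding metrics_daily partition for {context['ds']}")


with DAG(
    dag_id="metrics_daily_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="extract_source_events", python_callable=extract_source_events)
    rebuild = PythonOperator(task_id="rebuild_metrics_table", python_callable=rebuild_metrics_table)

    extract >> rebuild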

Enhance Analytics Capabilities: Collaborate with the risk team to streamline the development and maintenance of key reports and dashboards, leveraging tools like Looker to create flexible, explorative data views.

Elevate Machine Learning: Assist ML data scientists by developing batch ML pipelines using ETL and SageMaker jobs to train models and generate predictions, capturing the output for use in analytics and reporting.
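
As a rough, non-authoritative sketch, one way such a batch prediction step might look using the SageMaker Python SDK's batch transform; the model name and S3 paths are placeholders, not actual resources:

import sagemaker
from sagemaker.transformer import Transformer

session = sagemaker.Session()

# Score the latest feature snapshot with an already-trained model; a downstream
# ETL job then loads the predictions into the analytics tables used for reporting.
transformer = Transformer(
    model_name="risk-score-model",  # hypothetical model already registered in SageMaker
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/predictions/dt=2024-11-27/",
    sagemaker_session=session,
)

transformer.transform(
    data="s3://example-bucket/features/dt=2024-11-27/",  # hypothetical feature snapshot
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()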

Cross-Functional Collaboration: Partner with data scientists, engineers, product managers, and stakeholders to align data efforts with business goals. Ensure the smooth integration of new data assets and models into operational processes.

(Scope of any potential changes) Duties as defined by the company

■ Experience and Qualifications

Stakeholder Collaboration: Passionate about working with business stakeholders and data scientists to deliver impactful business value. Proactive in taking ownership of projects and independently developing innovative solutions.

Industry Experience: While not required, experience in financial services, payment services, or fraud prevention is a plus. A strong interest in these areas is essential.

Technical Proficiency:

  • Metrics Translation: Skilled in translating industry-specific metrics and definitions into well-documented, efficient code.
  • Spark Development: Experience building production Spark applications for batch ETL pipelines and processing terabyte-scale data using optimized SQL (Scala preferred for Spark, though PySpark is acceptable with a willingness to learn Scala); a brief illustrative sketch follows this list.
  • Job Orchestration: At least 2 years of experience developing ETL pipelines using job orchestration tools like Airflow or Prefect.
  • Data Integration: Proven experience creating data marts or sources used by data scientists, data analysts, and business users.
  • Business Intelligence Tools: Experience building data marts for BI tools such as Looker, Tableau, or PowerBI.
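
A brief PySpark sketch of the kind of batch job the Spark and data-mart requirements above describe; all table, column, and metric names are hypothetical, and Scala remains the stated preference (PySpark is shown here only for brevity):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("merchant_risk_mart").getOrCreate()

# Hypothetical source tables; real names would come from the company's warehouse.
payments = spark.table("raw.payments")
chargebacks = spark.table("raw.chargebacks")

# Aggregate event-level payment data into a small, BI-friendly mart:
# one row per merchant per day, with volume and chargeback metrics.
daily_mart = (
    payments
    .join(chargebacks, ["payment_id"], "left")
    .groupBy("merchant_id", F.to_date("paid_at").alias("payment_date"))
    .agg(
        F.count("payment_id").alias("payment_count"),
        F.sum("amount").alias("payment_amount"),
        F.count("chargeback_id").alias("chargeback_count"),
    )
)

(
    daily_mart.write
    .mode("overwrite")
    .partitionBy("payment_date")
    .saveAsTable("mart.merchant_daily_risk")
)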

■ Work Location

(Upon Employment) Tokyo, fully remote
(Scope of change) Location as specified by the company

■ Salary

Up to ¥12M per year

Details will be provided during the meeting.