Senior Big Data Engineer, Data & Analytics
- Permanent position
- Two new headcount open
- Competitive remuneration package
Our Client
A leading communications company.
The Opportunity
This role is accountable for our client's big data implementation capabilities:
- Design and implement big data solutions in distributed systems environments.
- Enable a machine learning environment for advanced analytics.
- Ensure that proposed solutions align with and conform to the big data architecture guidelines and roadmap.
The job responsibilities are as follows:
- Evaluate and renew implemented big data architecture solutions to ensure their relevance and effectiveness in supporting business needs and growth.
- Design, develop and maintain data pipelines, with a focus on writing scalable, clean, and fault-tolerant code that handles disparate data sources and processes large volumes of structured and unstructured data.
- Understand business requirements and solution designs, and develop and implement solutions that adhere to big data architectural guidelines while addressing those requirements.
- Support and maintain previously implemented big data projects, as well as provide guidance and consultation on other projects in active development as needed.
- Drive optimization, testing and tooling to improve data quality.
- Document and communicate technical complexities completely and clearly to team members and other key stakeholders.
- Develop architecture solutions for varied latency needs like batch, real-time, near-real-time and on-demand APIs.
- Work closely with our data science team to gather data requirements that support modelling.
- Review and approve high-level and detailed designs to ensure that the solution delivers on business needs while aligning with the data & analytics architecture principles and roadmap.
- Help establish and maintain the data governance processes and mechanisms for data lake and EDW.
- Understand various data security standards and use secure data governance tools to apply the required user-access controls on a per-dataset basis.
- Maintain and optimize the performance of our data analytics environment.
Your Background
- Degree qualified in Business Management, IT, Computer Systems, Software or Computer Engineering, or an equivalent field.
- Minimum 6 years of experience in data warehousing/big data environments.
- Experience in relational and dimensional data modeling and performance tuning of enterprise warehouses/big data environments.
- Experience with big data processing (Spark experience preferred).
- Experience designing and developing data models, integrating data from multiple sources, building ETL pipelines, and using other data wrangling tools in big data environments.
- Understanding of structured and unstructured data design/modeling.
- Experience using software engineering best practices in programming, testing, version control, agile development, etc.
The ideal candidate should have most of these technical competencies:
- Hadoop / Big Data knowledge and experience
- Design and development on the Hadoop platform and its components
- Big data platforms based on Cloudera Hadoop
- Informatica Big Data Management
- Python / Spark / Scala / Java
- Hive / HBase / Impala / Parquet
- Sqoop, Kafka, Flume
- SQL
- Relational Database Management System (RDBMS)
- NoSQL databases
- Data warehouse platforms or equivalent
And these are the essential skill sets for the role:
- Highly organized, self-motivated, proactive, and able to plan.
- Ability to analyze and understand complex problems.
- Ability to explain technical information in business terms.
- Ability to communicate clearly and effectively, both verbally and in writing.
- Strong in user requirements gathering, maintenance, and support.
- Good experience managing users and vendors.
- Experience with Agile methodology.
- Data architecture and data modeling for BI applications, data warehouses, and big data.
Interested candidates can send their resume to Maricris.Fermin@peoplebank.asia or apply online.
