- Build Hadoop/Big Data test methodologies (functional and non-functional) for high-volume, high-frequency data testing; prepare the Hadoop Test Plan; design Hadoop test cases, test scenarios, and test scripts.
- Prepare test data for test cases and execute Hadoop test cases.
- Test workflow processes, detect bugs through queries, and identify bad data.
- Automate testing efforts using Hadoop ecosystem tools such as Spark, Hive, HBase, and Oozie, and share test results.
- Conduct Hadoop (Spark) performance testing using standard tools.
- Work in both traditional and Agile software development and testing environments.
- Proactively identify risks and issues across the QA work streams, advise on increasing process maturity, and help develop testing standards, best practices, and processes.
- Working experience in a customer-facing onsite role, coordinating with QA test engineers and development team leads.
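To illustrate the query-driven bad-data detection mentioned above, here is a minimal, hypothetical sketch in plain Python (the names and sample records are illustrative, not from the posting); in practice the same rules would typically be expressed as Hive or Impala queries against the cluster:

```python
# Hypothetical example: flag rows that fail basic data-quality rules
# (missing or empty required fields). In a real Hadoop QA workflow this
# logic would usually run as a HiveQL/Impala query, not local Python.

def find_bad_rows(rows, required_fields):
    """Return rows where any required field is missing or empty."""
    bad = []
    for row in rows:
        if any(row.get(f) in (None, "") for f in required_fields):
            bad.append(row)
    return bad

records = [
    {"id": 1, "event": "click", "ts": "2023-01-01T00:00:00"},
    {"id": 2, "event": "", "ts": "2023-01-01T00:00:05"},   # bad: empty event
    {"id": 3, "event": "view", "ts": None},                # bad: missing ts
]

bad_rows = find_bad_rows(records, required_fields=["id", "event", "ts"])
print(len(bad_rows))  # 2 rows fail the checks
```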
We are seeking highly motivated candidates with a strong QA background in distributed, high-volume big data testing using data warehouses, MPP (DW) appliances, Hadoop, and its ecosystem technologies.
The ideal candidate will understand all technical aspects of data flow into a Hadoop cluster; distributed computing using data warehouses, MPP-DW, ETL, ELT, and SQL; the Apache Spark ecosystem; DevOps and continuous testing; and business intelligence and advanced analytics testing and reporting processes.
The Ideal Candidate Will Have:
- Must have 2+ years of hands-on live-project QA experience with one or more Big Data Hadoop platforms such as Hortonworks or Cloudera
- Must have 1+ years testing Big Data programs written in Spark (Scala/Python), storage layers (Hive/Impala/Kudu/HBase), and consumption tools (BI, Spotfire) in a pre-production cluster environment
- 1+ years testing data warehouse and BI applications built on SQL Server, Oracle, or Teradata
- 1+ years of ETL testing using one or more platforms such as Informatica, Talend, or SSIS
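A core ETL testing task on platforms like Informatica, Talend, or SSIS is source-to-target reconciliation. The following is a hypothetical sketch (function name, keys, and sample data are assumptions for illustration) using plain Python dicts in place of actual database queries:

```python
# Hypothetical example: reconcile a source dataset against the target
# loaded by an ETL job, checking row counts and key coverage. Real ETL
# testing would pull these sets via SQL against the source and target.

def reconcile(source_rows, target_rows, key="id"):
    """Compare source and target datasets by row count and key set."""
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    return {
        "count_match": len(source_rows) == len(target_rows),
        "missing_in_target": src_keys - tgt_keys,
        "unexpected_in_target": tgt_keys - src_keys,
    }

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]

result = reconcile(source, target)
print(result["count_match"])        # False: one row was dropped in the load
print(result["missing_in_target"])  # {3}
```

A row-count and key-set check like this is usually the first gate; field-level value comparisons and aggregate checksums follow once counts reconcile.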