Design, build, and implement efficient, scalable, and high-performance data pipelines, data models, and machine learning models for various industries.
The role also involves development, POC building, prototyping, research, experimenting with new technologies and tools, and dreaming up forward-looking use cases and building them.
Curiosity, tinkering, mastering, and sharing are inherent parts of the job description and requirements.
Requirements
Exploring and learning new technologies in big data, data warehousing, business analytics, and machine learning is the name of the game. You will be expected to think outside the box, be analytical, critical, and curious, and always ask “why?” and “can we do this smarter?”.
Bachelor's degree in IT, Engineering, Mathematics, Statistics, Data Science, or a related field.
Software development experience with Java, C#, Python, R, etc. is an advantage.
Ability to write expert-level, maintainable, and robust code.
Excellent exploration and tinkering skills, and an “always hungry” mindset for learning and adapting to new technologies and tools.
Strong knowledge in SQL and experience with relational database systems.
Self-starter, able to work independently with minimal supervision.
Organized, meticulous and detail-oriented.
Able to perform in a demanding, ever-changing, and fast-paced environment.
Fluency in written English is a must.
Must be able to communicate verbally in English.
Passion for expanding current knowledge and learning new skills.
Previous exposure to the Hadoop ecosystem, machine learning, or AI is an advantage.
Knowledge of NoSQL databases is an advantage.
Knowledge of and/or exposure to IoT devices and technology is an advantage.
Knowledge of and/or exposure to visualization tools for building reports and dashboards is an advantage.
Qualifications & other requirements
You should have, or be completing, the following to apply for this opportunity.