This role provides an opportunity to be part of the Data Engineering team at Near. You will work with data at huge scale and a cutting-edge tech stack, and leverage your skills and toolset to help us build a high-value, scalable product. You will be responsible for developing techniques to enhance data, and you will collaborate with Data Scientists, Software Engineers, and UI Engineers as part of a high-performance, problem-solving team.
You will be part of one of the fastest-growing Enterprise SaaS companies – a great opportunity for people who are self-driven and can work independently.
- Design and implement our data processing pipelines for different kinds of data sources, formats, and content for the Near Platform. Working with huge Data Lakes, Data Warehouses, and Data Marts is part of this challenging role.
- Design and develop solutions that are scalable, generic, and reusable.
- Responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.
- Develop techniques to analyze and enhance both structured and unstructured data, working with big data tools and frameworks.
- Collaborate closely with Data Scientists and Business Analysts to understand data and functional requirements.
- Design and build new data pipelines, and support existing ones, to standardize, clean, and ingest data.
- Participate in product design and development activities supporting Near’s suite of products.
- Liaise with various stakeholders across teams to understand business requirements.
Skills and Requirements
- You should hold a B.Tech/M.Tech degree.
- You should have 2-4 years of experience, with a minimum of 2 years at a data-driven company/platform. Competency in core Java is a must.
- You should have worked with distributed data processing frameworks such as Apache Spark, Apache Flink, or Hadoop.
- You should be a team player with an open mind, working with the team to approach problems and solve them the right way, with the right set of tools and technologies.
- You should have knowledge of frameworks and distributed systems, and be strong in algorithms, data structures, and design patterns.
- You should have an in-depth understanding of big data technologies and NoSQL databases (Kafka, HBase, Spark, Cassandra, MongoDB, etc.).
- Work experience with the AWS cloud platform, Spring Boot, and API development will be a plus.
- You should have exceptional problem-solving and analytical abilities, and organisational skills with an eye for detail.