Software Development
About Rapido
We are India’s largest bike-taxi platform, steadily venturing into Auto, Delivery, Rental, and more. Currently present in ~100 cities, we are growing close to ~50% year-on-year with steady funding. We have changed the concept of intra-city travel and made last-mile connectivity affordable to all. Rapido Cabs made its debut in May 2023, launching in eight cities, with Hyderabad, Bangalore, and Delhi as the primary cities for the initial rollout. The ambitious plan is an extensive expansion to over 25 cities in the coming 12 months. Along with being the #1 choice of 40 million people, we have built a solid base of over 5 million registered captains who have bettered their lives with Rapido. As an employer, we give our team members a lot of ownership and multiple avenues to grow within the company. You will grow with us through the right balance of ambition, fun, and a transparent work culture. Opportunities don't happen, you create them!

Job Summary:
As a Data Engineer, you will design, create, and implement optimal data pipelines. The best part of this role is the ability to own multiple data pipelines and govern their quality and longevity. Your pipelines will directly affect Rapido's daily business. You will be part of a young, growing team that delivers value to our users through creative improvements to our data platform and other data offerings, striking a balance between speed of delivery and quality while working with petabytes of data and hundreds of complex data pipelines.
Job Responsibilities:
- Create complex data processing pipelines with quality and correctness in mind, using tools like Spark or Flink.
- Collaborate with developers across teams (all PODs (Product Oriented Delivery) in the Engineering team), write code you are proud of, and work with cutting-edge data technologies.
- Design and model the data schemas to be ingested.
- Design scalable implementations of Machine Learning pipelines, working closely with Data Scientists.
- Deploy data pipelines to production following Continuous Delivery practices.
- Manage infrastructure and deployments on Kubernetes.

Job Requirements:
- Around 2-4 years of experience.
- Hands-on programming based on Test Driven Development.
- Functional understanding of Java/Python; familiarity with the Hadoop stack is a bonus.
- Basic knowledge of REST APIs and microservices; exposure to networking is a plus.
- Comfortable optimizing big data pipelines.
- Enjoys solving problems and designing data structures and algorithms.
- Self-reflective, with a hunger to improve and a keen interest in driving their own learning.
- Applies theoretical knowledge to practice.

What's in it for you?
- In the data team at Rapido, you will get exposure to every stack possible: big data, software engineering (Java/Scala), ML-ops, data-ops, and more.
- Be part of a platform that serves hundreds of users' data needs with state-of-the-art Trino clusters.
- Experience real-time applications deployed on Flink.
- Create and manage tools and frameworks built on top of open-source technologies.
- Work with mammoth Kubernetes clusters hosting a plethora of applications and tools.
- Get hands-on experience with open-source big data tools and their best practices.

Excited to solve challenges? Join Rapido and chase bigger milestones too!