Rubik is a venture-backed Real Estate technology company based in New York City. Our team has built an Investment Platform for Institutional Investors to deploy capital into the $4.4 trillion U.S. rental home market.
We leverage Data Science and Machine Learning to provide proprietary investment opportunities to Real Estate Investment Trusts, Hedge Funds and Family Offices. At Rubik, we are on a mission to redefine the outdated real estate investment industry using cutting-edge technology.
Rubik has just closed a $3.5 million Seed Round from prominent venture capital firms in Silicon Valley, New York City and beyond. We are now expanding our product development team in Prishtina, Kosovo to accelerate growth and bring new perspectives and skills into the company.
A glimpse of Rubik:
- A fast-growing technology company backed by prominent investors in the United States and Europe.
- A diverse team representing five nationalities and eleven languages.
- A culture based on intellectual pursuit, commitment, extreme ownership, humility, collaboration, meritocracy and fun.
- Hybrid work model; work remotely on Fridays.
- A range of team activities such as Weekly Happy Hours and getaways.
- An attractive compensation package, including perks such as stock options, transportation stipend, work-from-home stipend and health insurance.
Working as part of an elite team of back-end engineers, you will apply your skills and grow your technical knowledge through working on our core data platform. You will develop and extend Rubik’s existing back-end technologies, contribute to improving existing workflows and processes, architect and build technical solutions to product problems, and collaborate with teammates to ship new working features quickly. As a Senior Engineer, you will also provide technical leadership and cross-training to peers in areas of your expertise.
You will be responsible for expanding and improving our cloud-native data platform, building out new features and deploying and maintaining them as necessary. You will work with various components of our platform, including data pipelines, APIs and internal analytics algorithms.
What you will do:
- Leverage cloud platforms such as AWS and GCP to expand and improve Rubik’s data pipeline so it can efficiently extract, store, analyze and retrieve large amounts of data, and to expand the back end of Rubik’s serverless web applications.
- Develop scalable APIs used to run complex queries and operations efficiently on Rubik’s data platform.
- Develop ETL processes for extracting and transforming inbound data.
- Develop systems for data analysis and predictive analytics.
- Build systems for automated marketing via Email, SMS and other channels using third-party APIs.
- Work on integrating various automated marketing components with the core data platform.
- Set up monitoring systems to track errors and throughput across pipelines.
- Expand testing coverage across unit and integration testing for all systems.
- Set up and maintain CI/CD pipelines for all systems.
- Provide technical leadership and cross-training to peers in areas of your expertise.
- Provide feedback and support to management in sprint and roadmap planning to ensure delivery of working code, efficient development processes and long-term product quality.
We are looking for candidates with:
- 5+ years of relevant technical experience.
- Proficiency in Python.
- Proficiency in ETL tooling, including Pandas and NumPy.
- Proficiency in SQL and significant experience working with relational databases.
- Experience designing, developing and documenting RESTful APIs.
- Experience working with AWS including S3, Lambda, RDS, SQS, API Gateway (or equivalent).
- Experience working with containers and containerization software.
- Experience with container deployment & orchestration through Kubernetes (or equivalent).
- Experience writing unit, integration, and API tests.
- Experience developing serverless web applications or working with serverless architectures is a bonus.
- Experience working on data products (think AttomData), big data projects or data platforms (think Snowflake) is a bonus.
- A fast and iterative approach to programming.
- The ability to work independently.
- The ability to learn and adapt fast.
To apply, please send your resume to [email protected]