Requirements
Experience in designing and implementing data solutions on the Databricks platform.
Proficiency in programming languages such as Python, Scala, or SQL.
Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills with the ability to work effectively in cross-functional teams.
Experience with containerization technologies such as Docker and Kubernetes is a plus.
Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Responsibilities
Design and develop data processing pipelines and analytics solutions using Databricks (see the brief sketch after this list).
Architect scalable and efficient data models and storage solutions on the Databricks platform.
Collaborate with architects and other teams to migrate the current solution to Databricks.
Optimize performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
Use best practices for data governance, security, and compliance on the Databricks platform.
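
For illustration only, below is a minimal PySpark sketch of the kind of batch pipeline this role would build on Databricks. The source path, column names, and target table are hypothetical placeholders, not part of any actual project.

# Minimal PySpark sketch of a batch pipeline on Databricks.
# Source path, column names, and target table are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

# Read raw order events from cloud storage (e.g. S3 on AWS).
orders = spark.read.json("s3://example-bucket/raw/orders/")

# Basic cleansing and a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_revenue"),
         F.count("*").alias("order_count"))
)

# Write the result as a Delta table for downstream analytics and SQL users.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")

On Databricks, a job like this would typically run on a scheduled job cluster and write to Delta tables so that downstream analytics and SQL workloads can query the results.
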
Salary: Rs. 10,00,000 - Rs. 25,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Flexera | Databricks, PySpark, SQL, Spark, AWS
This is a Majesco product used by the Aetna Voluntary business. The project comprises several modules, such as PAS, GB, and DM, and requires resources skilled in Java, SQL, and Majesco internal tools.
Majesco PRASE (Product Rules and Scripting Engine) is a proprietary language and engine used for configuring and customizing Majesco's insurance software, particularly its L&A (Life & Annuity) and P&C (Property & Casualty) core products. It gives insurers the flexibility to define and manage complex business rules.
Salary: Rs. 10,00,000 - Rs. 25,00,000
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance