We found 281 jobs matching your search


Job Description

Job Title: Developer
Work Location: Hyderabad, TG / Kolkata, WB
Skill Required: Digital: PySpark ~ Azure Data Factory
Experience Range: 6 to 8 Yrs

Role Summary: We are looking for a Data Engineer with 2-4 years of hands-on experience (more experience is welcome) in PySpark, Azure Data Factory (ADF), and Azure-based data pipelines. The ideal candidate should have strong skills in building ETL workflows, working with big-data technologies, and supporting production data processes.

Key Responsibilities
1. PySpark Development
• Develop and optimize ETL pipelines using PySpark and Spark SQL.
• Work with DataFrames, transformations, and partitioning.
• Handle data ingestion from various formats (Parquet, CSV, JSON, Delta).
2. Azure Data Factory (ADF)
• Build and maintain ADF pipelines, datasets, linked services, and triggers.
• Integrate ADF pipelines with Azure Databricks, ADLS, Azure SQL, APIs, etc.
• Monitor pipeline runs and troubleshoot failures.
3. Azure Cloud / Databricks
• Work with ADLS Gen2 for data storage and management.
• Run, schedule, and debug Databricks notebooks.
• Use Delta Lake for data processing and incremental loads (see the sketch after this listing).
4. ETL & Data Management
• Implement data cleansing, transformations, and validation checks.
• Follow standard data engineering best practices.
• Support production jobs and ensure data quality.
5. DevOps & Collaboration
• Use Git or Azure DevOps for code versioning.
• Participate in code reviews and documentation.
• Collaborate with analysts and data architects on requirements.

Required Skills
• 2-4 years of hands-on experience with PySpark and Spark SQL.
• Experience building data pipelines in Azure Data Factory.
• Working knowledge of Azure Databricks and ADLS Gen2.
• Good SQL knowledge.
• Understanding of ETL concepts and data pipelines.

Good to Have
• Experience with Delta Lake (MERGE, schema evolution).
• Familiarity with CI/CD (Azure DevOps/GitHub).
• Exposure to Snowflake is a plus.

Soft Skills
• Strong analytical and problem-solving abilities.
• Good communication and teamwork skills.
• Ability to learn quickly and adapt to new tools.
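
As a sketch of the incremental-load pattern this role centers on: the snippet below ingests a raw batch with PySpark, applies basic cleansing, and upserts it into a curated table with a Delta Lake MERGE. The storage paths and the order_id key are hypothetical placeholders, not part of the posting.

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("incremental-load-sketch").getOrCreate()

    # Ingest a raw batch (Parquet here; CSV, JSON, and Delta read the same way).
    updates = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/orders/batch/")

    # Basic cleansing and validation before loading.
    updates = updates.dropDuplicates(["order_id"]).filter("order_id IS NOT NULL")

    # Delta Lake MERGE performs the incremental upsert into the curated table.
    target = DeltaTable.forPath(spark, "abfss://curated@account.dfs.core.windows.net/orders/")
    (target.alias("t")
     .merge(updates.alias("u"), "t.order_id = u.order_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())

In practice a job like this would run as a Databricks notebook triggered from an ADF pipeline; the MERGE keeps re-runs idempotent.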

Responsibilities

  • Salary: Rs. 90,000 - Rs. 1,65,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Developer

Job Description

Vulnerability management, Linux administration with shell scripting, and deployment experience.

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: 546901_CIS_ Infrastructure Security

Job Description

• Expertise in ML-based recommendation systems and predictive analytics (a minimal sketch follows below)
• Hands-on experience with AI marketing tools for multi-domain campaigns
• Strong understanding of customer lifecycle, personalization, and cross-platform targeting
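
To make the recommendation-systems requirement concrete, here is a minimal item-based collaborative-filtering sketch; the interaction matrix is toy data, and a production system would use sparse matrices and far richer signals.

    import numpy as np

    # Toy implicit-feedback matrix: rows are users, columns are items.
    R = np.array([
        [3, 0, 1, 0],
        [0, 2, 0, 1],
        [1, 0, 2, 0],
        [0, 1, 0, 3],
    ], dtype=float)

    # Cosine similarity between item columns.
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    item_sim = (R / norms).T @ (R / norms)

    def recommend(user_idx, k=2):
        # Score items by similarity-weighted interactions; hide already-seen items.
        scores = item_sim @ R[user_idx]
        scores[R[user_idx] > 0] = -np.inf
        return np.argsort(scores)[::-1][:k]

    print(recommend(0))  # top-k unseen items for user 0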

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: 551330 | IC | Rakuten | ML Engineer

Job Description

Application Lead – C++ Modernization

Role Summary: As an Application Lead specializing in C++ modernization, you will lead efforts to transform legacy scientific and engineering calculation engines (written in C++, C, and Fortran) into scalable, high-performance services. You will collaborate with cross-functional teams to ensure algorithm accuracy, performance optimization, and seamless integration into modern platforms.

Key Responsibilities:
Legacy Modernization & Architecture
• Lead the migration and wrapping of legacy computational logic into a modern service-based architecture (a minimal sketch follows this description).
• Analyze and document legacy algorithms and business logic.
• Design scalable, high-performance calculation engines using modern architectural patterns.
• Ensure regression-tested accuracy against legacy systems.
Technical Leadership
• Define architecture and design standards for computational services.
• Conduct technical reviews and mentor engineers on algorithm implementation and optimization.
• Establish coding and testing best practices for numerical modules.
Algorithm Development & Validation
• Oversee the development and validation of scientific and numerical algorithms.
• Ensure mathematical precision and performance benchmarks are met.
• Implement optimization strategies including parallelization and multi-threading.
Service Integration & Extensibility
• Design APIs/microservices for platform integration.
• Collaborate with backend and platform teams to ensure extensibility and reuse.
• Implement versioning and audit capabilities for calculation services.
Performance Engineering
• Profile and optimize memory and compute performance.
• Ensure engines meet SLAs for latency and throughput.
• Apply parallel computing techniques where applicable.
Team Leadership
• Lead a team of engineers across onshore/offshore locations.
• Drive sprint planning, code reviews, and knowledge sharing.
• Foster collaboration and continuous improvement.

Required Skills & Experience:
Technical Skills
• Expert in C++, STL, scientific computing, and numerical methods.
• Proficient in C, Fortran, .NET, and performance optimization techniques.
• Strong understanding of multi-threading, parallel processing, and computational mathematics.
• Familiarity with microservices, REST APIs, Docker/Kubernetes, and Azure.
Experience
• 10+ years in software development, with 7+ years in C++.
• 3+ years in technical leadership roles.
• Proven experience in legacy code modernization and scientific computing.
Soft Skills
• Strong analytical and problem-solving abilities.
• Excellent communication and stakeholder engagement.
• Leadership and mentoring capabilities.
• Detail-oriented with a focus on accuracy and performance.
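
A minimal sketch of the "wrap a legacy calculation engine as a service" pattern described above, in Python for brevity. The shared library libcalcengine.so and its calc_pressure function are hypothetical stand-ins for the legacy C/C++/Fortran code; a production service would add versioning, auditing, and a proper API framework.

    import ctypes
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical legacy engine built as a shared library, exposing:
    #   double calc_pressure(double temp, double flow);
    legacy = ctypes.CDLL("./libcalcengine.so")
    legacy.calc_pressure.argtypes = [ctypes.c_double, ctypes.c_double]
    legacy.calc_pressure.restype = ctypes.c_double

    class CalcHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read a JSON body and delegate the numerics to the legacy engine.
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            result = legacy.calc_pressure(body["temp"], body["flow"])
            # Tag responses with an engine version for audit and regression comparison.
            payload = json.dumps({"pressure": result, "engine": "legacy-1.0"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CalcHandler).serve_forever()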

Responsibilities

  • Salary: Rs. 0 - Rs. 1,80,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Application Developer

Job Description

GCP DevOps
• Design, deploy, and manage Kubernetes clusters using Google Kubernetes Engine (GKE).
• Develop and maintain Infrastructure as Code (IaC) using Terraform.
• Optimize GKE workloads for performance, reliability, and cost efficiency.
• Implement multi-region and multi-environment cluster strategies for high availability and disaster recovery (DR); a small verification sketch follows below.
Essential Skills – GCP DevOps: the same four areas listed above.
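
As one small illustration of the multi-region/DR point above, this sketch shells out to the gcloud CLI and checks that a RUNNING GKE cluster exists in each required region; the region list is a hypothetical policy, and an authenticated gcloud installation is assumed.

    import json
    import subprocess

    # Hypothetical DR policy: expect a running cluster in each of these regions.
    REQUIRED_REGIONS = {"us-central1", "europe-west1"}

    raw = subprocess.run(
        ["gcloud", "container", "clusters", "list", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout

    running = {c["location"] for c in json.loads(raw) if c.get("status") == "RUNNING"}
    missing = REQUIRED_REGIONS - running
    print("DR coverage OK" if not missing else f"Missing regions: {missing}")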

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Digital : DevOps~Digital : Google Cloud

Job Description

SharePoint Admin (SharePoint 2016 ~ ITIS_AMS_Sharepoint Admin)
• Manage SharePoint sites, libraries, lists, and permissions to ensure proper access and security.
• Monitor and maintain the health and performance of SharePoint sites and services (a small sketch follows below).
• Troubleshoot and resolve issues related to SharePoint and OneDrive functionality, performance, and security.
• Proficiency in developing workflows using lists, Power Automate, and flows in SharePoint.
• Familiarity with ShareGate for migration, management, and reporting tasks.
• Develop and maintain documentation, policies, and procedures related to SharePoint administration.
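
On the monitoring/administration side, a minimal Microsoft Graph sketch that enumerates SharePoint sites; acquiring the bearer token (for example via an MSAL app registration) is assumed and out of scope, and TOKEN is a placeholder.

    import requests

    TOKEN = "..."  # placeholder; obtain via MSAL/app registration in practice

    # List SharePoint sites visible to the app via Microsoft Graph.
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/sites?search=*",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for site in resp.json().get("value", []):
        print(site.get("displayName"), site.get("webUrl"))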

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: SharePoint 2016~ITIS_AMS_Sharepoint Admin

Job Description

Informatica PowerCenter
Candidates must have CDI and CAI experience, not just the on-premises integration stack.
1. Experience developing integrations between SAP ECC and Informatica's MDM SaaS tool Product 360; Customer 360 experience is a plus.
2. Experience developing integrations from Informatica's MDM SaaS tool to SAP ECC, Snowflake, and Databricks.
3. General understanding of Informatica's MDM SaaS applications.
4. Understanding of CDC (change data capture) and error-handling techniques for integrations (an illustrative sketch follows below).
5. Strong communication skills.

Also expected:
• Expertise in platform and tool administration.
• Hands-on experience working with a data catalog tool.
• Good understanding of Data Governance and Data Catalog.
• Experience in support and maintenance.
• Good communication and interpersonal skills.

Operational duties:
• Status/Health Monitoring: monitor search and discovery scans.
• Break-fixes: troubleshoot and fix issues.
• User Management: grant/revoke access.
• Incident Management: resolution and communication.
• Problem Management: root cause analysis.
• Status meeting reviews with the Project Manager.
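
To ground the CDC item: log-based tools read database change logs rather than comparing snapshots, but the event model they emit reduces to the toy diff below, which produces insert/update/delete events for keyed rows. The table contents are invented for illustration.

    # Toy snapshot-diff illustration of change data capture (CDC).
    # Real CDC reads transaction logs; the emitted event model is the same.
    old = {1: {"name": "Acme", "tier": "gold"}, 2: {"name": "Beta", "tier": "silver"}}
    new = {1: {"name": "Acme", "tier": "platinum"}, 3: {"name": "Gamma", "tier": "bronze"}}

    events = []
    for key in new.keys() - old.keys():
        events.append(("insert", key, new[key]))
    for key in old.keys() - new.keys():
        events.append(("delete", key, old[key]))
    for key in new.keys() & old.keys():
        if new[key] != old[key]:
            events.append(("update", key, new[key]))

    for op, key, row in events:
        print(op, key, row)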

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Informatica PowerCenter

Job Description

Digital : Data Science~Digital : Google Cloud~Digital : Google Data Engineering

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Digital : Data Science~Digital : Google Cloud~Digital : Google Data Engineering

Job Description

ICT Risk & Digital Operational Resilience Strategy

Responsibilities

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: 551076 and 551079_DCG_ ICT Risk & Digital Operational Resilience Strategy