We found 275 jobs matching your search

Advanced Search

Skills

Locations

Experience

Job Description

1. Exposure to the basic FICO modules: GL, AP, AR, CCA, IO. Asset Accounting experience is also required.
2. Exposure to testing the various finance-related business processes.
3. Good understanding of the integration with the MM, SD, PS & ISU modules.
4. Experience range: 4 to 9 years.

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : SAP FICO

Job Description

PL/SQL~Functional Testing – Roles & Responsibilities

A Selenium Automation Tester is responsible for designing and implementing automated tests for web applications. Key responsibilities include:
• Developing and executing automated tests using the Selenium framework to ensure software quality.
• Collaborating with development teams to identify and resolve issues, and maintaining a database of software defects.
• Analyzing test results and tracking metrics to improve testing processes.
• Staying updated with industry trends and technologies, and suggesting process improvements.
• Programming skills in Java, Python, or C#, and familiarity with test automation tools.
• Understanding business requirements and creating test cases.
• Analyzing automation results, reporting defects, and working with developers on resolution.
• Ensuring traceability between requirements and test cases.
• Validating UI/UX, workflows, and troubleshooting guides.
• Designing and developing automation scripts using tools like Selenium.
• Maintaining and updating automation frameworks.

Personal and Organizational Skills
• Proactive and initiative-driven: self-motivated with a go-getter attitude, capable of solving complex problems.
• Collaborative: works effectively with QA, product managers, and cross-functional teams; a team player.
• Fast learner: eager to adapt to new technologies and project requirements.
• Good communicator: excellent verbal and written communication skills.
• Independent: capable of managing tasks with minimal supervision.
• Documentation: adept at creating architectural diagrams and maintaining detailed documentation in Confluence.
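To make the Selenium requirement concrete, here is a minimal sketch of the kind of automated check the role describes, in Python with Selenium 4. The target URL, element locators, credentials, and expected title are placeholders, not details from the posting.

```python
# Minimal sketch of an automated UI check; assumes Selenium 4+ (which
# resolves the Chrome driver automatically). URL and locators are
# hypothetical stand-ins for a real application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical app under test
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "username"))  # hypothetical locator
    )
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # A real suite would assert against requirements-derived expectations
    # and log failures to a defect-tracking database.
    assert "Dashboard" in driver.title, "login flow did not reach dashboard"
finally:
    driver.quit()
```

In practice such checks would live in a framework (e.g., pytest with page objects) so results can be traced back to the requirements they cover.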

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : PL/SQL~Functional Testing

Job Description

• Hands-on experience in healthcare interoperability and integration development.
• Strong understanding of FHIR API development and RESTful services.
• Knowledge of C-CDA document types and HL7 FHIR standards.
• Proficiency in scripting languages, specifically Python.
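As an illustration of the FHIR REST skill set above, the following sketch queries a Patient resource from the public HAPI FHIR R4 test server. The server is a community sandbox and the search parameters are illustrative only.

```python
# Minimal sketch of reading a FHIR resource over REST. FHIR search
# results come back as a Bundle; each entry wraps one resource.
import requests

base = "https://hapi.fhir.org/baseR4"
resp = requests.get(
    f"{base}/Patient",
    params={"_count": 1},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["resourceType"], patient.get("id"))
```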

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : HL7 FHIR

Job Description

Job Title: Developer
Work Location: Hyderabad, TG / Kolkata, WB
Skill Required: Digital : PySpark~Azure Data Factory
Range: 6 to 8 Yrs

Role Summary:
We are looking for a Data Engineer with 2-4 years (more experience is welcome) of hands-on experience in PySpark, Azure Data Factory (ADF), and Azure-based data pipelines. The ideal candidate should have strong skills in building ETL workflows, working with big-data technologies, and supporting production data processes.

Key Responsibilities:
1. PySpark Development
• Develop and optimize ETL pipelines using PySpark and Spark SQL.
• Work with DataFrames, transformations, and partitioning.
• Handle data ingestion from various formats (Parquet, CSV, JSON, Delta).
2. Azure Data Factory (ADF)
• Build and maintain ADF pipelines, datasets, linked services, and triggers.
• Integrate ADF pipelines with Azure Databricks, ADLS, Azure SQL, APIs, etc.
• Monitor pipeline runs and troubleshoot failures.
3. Azure Cloud / Databricks
• Work with ADLS Gen2 for data storage and management.
• Run, schedule, and debug Databricks notebooks.
• Use Delta Lake for data processing and incremental loads.
4. ETL / Data Management
• Implement data cleansing, transformations, and validation checks.
• Follow standard data engineering best practices.
• Support production jobs and ensure data quality.
5. DevOps Collaboration
• Use Git or Azure DevOps for code versioning.
• Participate in code reviews and documentation.
• Collaborate with analysts and data architects on requirements.

Required Skills:
• 2-4 years of hands-on experience with PySpark and Spark SQL.
• Experience building data pipelines in Azure Data Factory.
• Working knowledge of Azure Databricks and ADLS Gen2.
• Good SQL knowledge.
• Understanding of ETL concepts and data pipelines.

Good to Have:
• Experience with Delta Lake (MERGE, schema evolution).
• Familiarity with CI/CD (Azure DevOps, GitHub).
• Exposure to Snowflake is a plus.

Soft Skills:
• Strong analytical and problem-solving abilities.
• Good communication and teamwork skills.
• Ability to learn quickly and adapt to new tools.
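As a concrete illustration of the PySpark responsibilities above, here is a minimal ETL sketch: ingest Parquet, apply cleansing and validation transformations, and write partitioned Delta output. The paths and column names are hypothetical, and a Delta-enabled Spark session (e.g., on Databricks) is assumed.

```python
# Minimal PySpark ETL sketch: read raw Parquet, cleanse/validate, write
# Delta with partitioning. Paths and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.parquet("/mnt/adls/raw/orders/")  # hypothetical ADLS mount

clean = (
    raw.dropDuplicates(["order_id"])                # basic cleansing
       .filter(F.col("order_date").isNotNull())     # validation check
       .withColumn("order_ts", F.to_timestamp("order_date"))
       .withColumn("ingest_date", F.current_date())
)

(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("ingest_date")                   # partitioning for pruning
      .save("/mnt/adls/curated/orders/"))
```

In an ADF-orchestrated setup, a pipeline of this shape would typically run as a Databricks notebook activity, with ADF handling triggers and failure monitoring.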

Responsibilities

  • Salary : Rs. 90,000.0 - Rs. 1,65,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Developer

Job Description

Vulnerability management and Linux administration, with shell-scripting and deployment experience.

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : 546901_CIS_ Infrastructure Security

Job Description

• Expertise in ML-based recommendation systems and predictive analytics.
• Hands-on experience with AI marketing tools for multi-domain campaigns.
• Strong understanding of customer lifecycle, personalization, and cross-platform targeting.
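To ground the recommendation-systems requirement, here is a toy item-based collaborative-filtering sketch in Python/NumPy. The ratings matrix is fabricated illustration data; a production system would use learned embeddings or a dedicated library rather than raw cosine similarity.

```python
# Toy item-based collaborative filtering: score unseen items for a user
# by a similarity-weighted sum of that user's existing ratings.
import numpy as np

# rows = users, cols = items; 0 means "not interacted" (illustrative data)
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# cosine similarity between item columns
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
sim = (ratings.T @ ratings) / (norms.T @ norms + 1e-9)

def recommend(user: int, k: int = 2) -> list:
    """Return the top-k unseen item indices for the given user."""
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf   # mask items already seen
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # e.g. top-2 item indices for the first user
```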

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : 551330 | IC | Rakuten | ML Engineer

Job Description

Application Lead – C++ Modernization

Role Summary:
As an Application Lead specializing in C++ modernization, you will lead efforts to transform legacy scientific and engineering calculation engines (written in C++, C, and Fortran) into scalable, high-performance services. You will collaborate with cross-functional teams to ensure algorithm accuracy, performance optimization, and seamless integration into modern platforms.

Key Responsibilities:

Legacy Modernization & Architecture
• Lead the migration and wrapping of legacy computational logic into a modern service-based architecture.
• Analyze and document legacy algorithms and business logic.
• Design scalable, high-performance calculation engines using modern architectural patterns.
• Ensure regression-tested accuracy against legacy systems.

Technical Leadership
• Define architecture and design standards for computational services.
• Conduct technical reviews and mentor engineers on algorithm implementation and optimization.
• Establish coding and testing best practices for numerical modules.

Algorithm Development & Validation
• Oversee the development and validation of scientific and numerical algorithms.
• Ensure mathematical precision and performance benchmarks are met.
• Implement optimization strategies, including parallelization and multi-threading.

Service Integration & Extensibility
• Design APIs/microservices for platform integration.
• Collaborate with backend and platform teams to ensure extensibility and reuse.
• Implement versioning and audit capabilities for calculation services.

Performance Engineering
• Profile and optimize memory and compute performance.
• Ensure engines meet SLAs for latency and throughput.
• Apply parallel computing techniques where applicable.

Team Leadership
• Lead a team of engineers across onshore/offshore locations.
• Drive sprint planning, code reviews, and knowledge sharing.
• Foster collaboration and continuous improvement.

Required Skills & Experience:

Technical Skills
• Expert in C++, the STL, scientific computing, and numerical methods.
• Proficient in C, Fortran, .NET, and performance optimization techniques.
• Strong understanding of multi-threading, parallel processing, and computational mathematics.
• Familiarity with microservices, REST APIs, Docker/Kubernetes, and Azure.

Experience
• 10+ years in software development, with 7+ years in C++.
• 3+ years in technical leadership roles.
• Proven experience in legacy code modernization and scientific computing.

Soft Skills
• Strong analytical and problem-solving abilities.
• Excellent communication and stakeholder engagement.
• Leadership and mentoring capabilities.
• Detail-oriented with a focus on accuracy and performance.
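The core of this role is wrapping legacy native calculation engines behind modern service APIs. As one possible shape of that wrapping step, here is a minimal Python ctypes sketch; the shared library liblegacy.so and its compute_rate function are hypothetical stand-ins for a real C/Fortran engine, not anything named in the posting.

```python
# Minimal sketch of wrapping a legacy native routine so it can sit behind
# a modern service endpoint. liblegacy.so / compute_rate are hypothetical;
# a real effort would also pin the ABI and regression-test results against
# the legacy system, as the posting requires.
import ctypes

lib = ctypes.CDLL("./liblegacy.so")  # hypothetical legacy engine
lib.compute_rate.argtypes = [ctypes.c_double, ctypes.c_double]
lib.compute_rate.restype = ctypes.c_double

def compute_rate(principal: float, years: float) -> float:
    """Thin, typed wrapper suitable for exposing via a REST microservice."""
    return float(lib.compute_rate(principal, years))

if __name__ == "__main__":
    print(compute_rate(100_000.0, 7.0))
```

A wrapper of this kind is typically the first, lowest-risk step; rewriting hot paths natively in modern C++ can follow once regression baselines exist.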

Responsibilities

  • Salary : Rs. 0.0 - Rs. 1,80,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Application Developer

Job Description

GCP DevOps
• Design, deploy, and manage Kubernetes clusters using Google Kubernetes Engine (GKE).
• Develop and maintain Infrastructure as Code (IaC) using Terraform.
• Optimize GKE workloads for performance, reliability, and cost efficiency.
• Implement multi-region and multi-environment cluster strategies for high availability and disaster recovery (DR).

Essential Skills – GCP DevOps: as listed above.
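As one concrete form the GKE reliability work could take, here is a minimal Python sketch using the official Kubernetes client to flag Deployments whose ready replicas lag their spec. It assumes cluster credentials are already configured (e.g., via `gcloud container clusters get-credentials`); the check itself is generic Kubernetes, not GKE-specific.

```python
# Minimal cluster health check: report Deployments that are not fully
# ready. Requires the `kubernetes` package and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} replicas ready")
```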

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Digital : DevOps~Digital : Google Cloud

Job Description

SharePoint 2016~ITIS_AMS_Sharepoint Admin
• Manage SharePoint sites, libraries, lists, and permissions to ensure proper access and security.
• Monitor and maintain the health and performance of SharePoint sites and services.
• Troubleshoot and resolve issues related to SharePoint and OneDrive functionality, performance, and security.
• Proficiency in developing workflows using lists, Power Automate, and flows in SharePoint.
• Familiarity with ShareGate for migration, management, and reporting tasks.
• Develop and maintain documentation, policies, and procedures related to SharePoint administration.
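To illustrate the administration tasks above, here is a minimal sketch of a SharePoint REST call that inventories a site's lists. The site URL is a placeholder, and acquiring the Azure AD access token is assumed to have happened elsewhere; only the API shape is shown.

```python
# Minimal SharePoint REST sketch: enumerate a site's lists with titles
# and item counts. Site URL and token are placeholders; authentication
# (e.g., via an Azure AD app registration) is out of scope here.
import requests

site = "https://contoso.sharepoint.com/sites/teamsite"  # hypothetical site
token = "<access-token>"                                 # acquired elsewhere

resp = requests.get(
    f"{site}/_api/web/lists",
    headers={
        "Accept": "application/json;odata=verbose",
        "Authorization": f"Bearer {token}",
    },
    timeout=10,
)
resp.raise_for_status()
for lst in resp.json()["d"]["results"]:
    print(lst["Title"], lst["ItemCount"])
```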

Responsibilities

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : SharePoint 2016~ITIS_AMS_Sharepoint Admin