• SAP FI (GL, AP, AR, Fixed Assets, year-end/periodic activities, IC, Banking)
• SAP CO (Cost Element Accounting, Cost Center Accounting, Profit Center Accounting, Product Costing, COPA)
• Candidate should be thorough, currently hands-on, and able to provide solutions and address issues independently
• Utilize technical and domain knowledge to develop and implement effective solutions; provide hands-on mentoring to team members through all phases of the Systems Development Life Cycle (SDLC)
• Analyze existing systems supporting quality business processes and recommend system improvements
• Work with the stakeholders to gather and organize business requirements
• Define solution options for solving particular business problems
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Minimum 8 years of experience working as a Snowflake, AWS, and Big Data developer
Key skills required:
Snowflake – Snowpark
Apache Iceberg
AWS services – Glue, CloudFormation, Lake Formation, IAM, Lambda, Redshift
Coding Languages – Python, SQL
Big Data technologies – Spark, PySpark (an Iceberg-on-Spark sketch follows the responsibilities below)
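By way of illustration, a minimal Snowpark sketch of the pushdown-style work this role involves; the connection parameters, the ORDERS table, and its column names are hypothetical:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Hypothetical connection parameters; real values come from your account.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# The filter and aggregation are pushed down and executed inside Snowflake;
# ORDERS, STATUS, ORDER_DATE, and AMOUNT are assumed names.
daily_totals = (
    session.table("ORDERS")
    .filter(col("STATUS") == "SHIPPED")
    .group_by("ORDER_DATE")
    .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
daily_totals.show()
session.close()
```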
• Work in an Agile environment and participate in daily scrum stand-ups, sprint planning, reviews, and retrospectives.
• Understand project requirements and translate them into technical solutions that meet the project's quality standards.
• Ability to work in a team in a diverse, multi-stakeholder environment and collaborate with upstream/downstream functional teams to identify, troubleshoot, and resolve data issues.
• Strong problem-solving and analytical skills.
• Excellent verbal and written communication skills.
• Experience and desire to work in a Global delivery environment.
• Stay up to date with new technologies and industry trends in development.
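As referenced in the skills list above, a minimal sketch of writing and reading an Apache Iceberg table from PySpark, assuming the iceberg-spark-runtime package is on the classpath; the "demo" catalog, database, and S3 warehouse path are hypothetical:

```python
from pyspark.sql import SparkSession

# Hadoop-type Iceberg catalog; names and the warehouse path are placeholders.
spark = (
    SparkSession.builder.appName("iceberg-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://my-bucket/warehouse")
    .getOrCreate()
)

events = spark.createDataFrame(
    [(1, "click"), (2, "view")], ["event_id", "event_type"]
)
# Create or replace an Iceberg table from the DataFrame.
events.writeTo("demo.db.events").using("iceberg").createOrReplace()

# Read it back through the same catalog.
spark.table("demo.db.events").show()
```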
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Role Category : Programming & Design
Role : Snowflake with AWS and PySpark Senior Developer
Skill 1:
Level – SA & M
Location – Bangalore & Chennai
Notice – Immediate
Primary mandate skill required – GCP Cloud Infra and Solutions, Terraform, platform CI/CD pipelines, Google Cloud resource deployment
Secondary mandate skill required – Scripting (Python, Bash)
Open to look at CWRs – Yes
Flexible to hire in any location – Any Cognizant location
Detailed Job Description –
We are seeking a Senior Cloud Engineer to build and automate the core infrastructure for a data and AI/ML platform on Google Cloud.
You will partner with a Senior Data Engineer to create a scalable, secure, and reliable foundation for services like BigQuery, Dataflow, and AI Platform.
Primary Responsibilities:
Design, build, and manage robust GCP infrastructure using Infrastructure as Code (Terraform, Deployment Manager).
Develop and maintain sophisticated CI/CD pipelines for the data platform using Cloud Build and Spinnaker.
Implement and optimize GCP Cloud Storage solutions to support massive datasets.
Automate infrastructure and operational workflows using Python and Bash (see the sketch after this list).
Collaborate closely with data engineers to ensure seamless deployment and operation of data processing and machine learning workloads.
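As a sketch of the Python-driven workflow automation listed above; the infra/ directory layout and the drift-detecting plan/apply flow are assumptions, not a prescribed design:

```python
import json
import subprocess

def terraform(*args, workdir="infra"):
    """Run a terraform subcommand in a hypothetical infra/ directory."""
    return subprocess.run(
        ["terraform", f"-chdir={workdir}", *args],
        check=True, capture_output=True, text=True,
    )

terraform("init", "-input=false")

# `plan -detailed-exitcode` exits 0 when there are no changes, 2 when
# changes are present, so the script can apply only when needed.
plan = subprocess.run(
    ["terraform", "-chdir=infra", "plan", "-input=false",
     "-out=tfplan", "-detailed-exitcode"],
    capture_output=True, text=True,
)
if plan.returncode == 2:
    terraform("apply", "-input=false", "-auto-approve", "tfplan")
    print(json.loads(terraform("output", "-json").stdout))
elif plan.returncode != 0:
    raise RuntimeError(plan.stderr)
```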
Required Skills:
Deep expertise in GCP infrastructure services.
Proven hands-on experience with IaC (Terraform strongly preferred).
Strong background in building and managing CI/CD pipelines (Cloud Build, Spinnaker).
Proficient scripting skills in Python and/or Bash.
Experience supporting large-scale data and analytics platforms is a major plus.
Relevant certifications (e.g., Google Cloud Certified - Professional Cloud DevOps Engineer, Professional Cloud Architect) are highly desirable.
Skill 2:
Primary mandate skill required – GCP Infra, Infrastructure as Code (IaC) using Terraform, DevOps, CI/CD pipelines, GKE
Secondary mandate skill required – Monitoring (Cloud Monitoring / Cloud Logging), Anthos Service Mesh
Flexible to hire in any location – Any Cognizant location
Level – SA/M
Notice – 30 Days
Detailed Job Description –
Kubernetes Platform: Hands-on experience with Google Kubernetes Engine (GKE).
Infrastructure as Code (IaC) & CI/CD: Very strong expertise in Terraform, Helm, Cloud Build, Cloud Source Repositories, and Artifact Registry.
GitOps: Strong experience in managing and architecting GitOps CI/CD workflows using Config Sync.
Monitoring & Observability: Strong proficiency with Google Cloud's operations suite, including Cloud Monitoring for creating and customizing metrics/dashboards (a custom-metric sketch follows this list) and Cloud Logging for log analysis.
Service Mesh: Development and working experience with Anthos Service Mesh lifecycle management, including configuring and troubleshooting applications deployed within the mesh.
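To make the Cloud Monitoring custom-metric work above concrete, a minimal sketch using the google-cloud-monitoring client; the project ID, metric type, and value are hypothetical:

```python
import time
from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT_ID = "my-gcp-project"  # hypothetical project

client = monitoring_v3.MetricServiceClient()

# Build one data point for a hypothetical custom metric.
series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/platform/deploy_duration_seconds"
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
series.points = [
    monitoring_v3.Point({"interval": interval, "value": {"double_value": 42.0}})
]

# Write the point; it can then be charted on a Cloud Monitoring dashboard.
client.create_time_series(name=f"projects/{PROJECT_ID}", time_series=[series])
```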
Skill 3:
Primary mandate skill required – GCP Infra, Infrastructure as Code (IaC) using Terraform, DevOps, CI/CD pipelines, GKE
Secondary mandate skill required – Monitoring third-party solutions such as Prometheus and Grafana.
Level – SA/M
Flexible to hire in any location – Yes, pan India
Shift time – 2 pm to 11 pm
Notice – 30 Days
JD and Role Overview
We are seeking a highly skilled and experienced Senior Cloud and DevOps Engineer to join our team. The ideal candidate will have deep expertise in cloud infrastructure, DevOps practices, and automation, with a strong focus on Google Cloud Platform (GCP) and preferably Microsoft Azure. This role involves designing, implementing, monitoring, and maintaining scalable cloud environments and CI/CD pipelines to support mission-critical applications.
Key Responsibilities
Develop and maintain Infrastructure as Code (IaC) using Terraform.
Design and implement secure cloud networking solutions, including firewalls, WAFs, ingress/egress rules, and SSL termination.
Architect, deploy, and manage Kubernetes clusters, including setup of application components, ingress controllers, secrets management, and lifecycle maintenance.
Build and optimize CI/CD pipelines using GitHub Actions.
Monitor cloud infrastructure using native tools and third-party solutions such as Prometheus and Grafana (a minimal Prometheus sketch follows this list).
Collaborate with cross-functional teams to support deployments and troubleshoot issues.
Participate in early morning or late evening calls to support global deployments and incident resolution whenever needed.
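A minimal sketch of exposing a custom metric for Prometheus to scrape, using the prometheus-client library; the gauge name and the update loop are hypothetical stand-ins:

```python
import random
import time
from prometheus_client import Gauge, start_http_server  # pip install prometheus-client

# Hypothetical gauge; Prometheus scrapes it from the /metrics endpoint below
# and Grafana can then chart it.
IN_FLIGHT = Gauge("deployments_in_flight", "Deployments currently running")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        IN_FLIGHT.set(random.randint(0, 5))  # stand-in for a real measurement
        time.sleep(15)
```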
Required Qualifications
Proven experience with Google Cloud Platform (GCP); Azure experience is a strong plus.
Deep understanding of cloud infrastructure, networking concepts and security configurations.
Strong hands-on experience with Kubernetes cluster setup and management.
Proficiency in Terraform and GitHub Actions for automation and CI/CD.
Experience with cloud monitoring tools including Prometheus, Grafana, and native cloud solutions.
Excellent problem-solving skills and ability to work independently in a fast-paced environment.
Flexibility to work outside standard hours to support global operations.
Salary : Rs. 10,00,000 - Rs. 20,00,000
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Description:
A Big Data Engineer is an IT professional specializing in the design, development, and maintenance of systems that handle and process large, complex datasets (big data). They are responsible for building the infrastructure and pipelines that allow organizations to collect, store, process, and analyze vast amounts of data, often in the terabyte and petabyte range.
Key Responsibilities of a Big Data Engineer:
• Designing and Building Data Infrastructure: Creating the architecture and systems for storing and processing massive amounts of data.
• Data Collection and Processing: Gathering data from various sources, cleaning and transforming it, and preparing it for analysis.
• Developing Data Pipelines: Building pipelines to move data from source systems to data storage and processing systems.
• Ensuring Data Quality and Integrity: Implementing processes to ensure the accuracy and reliability of the data.
• Optimizing Performance: Improving the efficiency and scalability of data storage and processing systems.
• Collaboration with Other Teams: Working with data scientists, analysts, and other teams to support data-driven decision-making.
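To make the pipeline responsibilities above concrete, a minimal PySpark extract-transform-load sketch; the S3 paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw JSON events from a hypothetical landing zone.
raw = spark.read.json("s3://landing-zone/events/")

# Transform: deduplicate, enforce a basic quality rule, derive a partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: partitioned Parquet for downstream analysis.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-zone/events/"
)
```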
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Description: Security Administrator (Offshore)
• Security Configuration and Maintenance: Configure and maintain security groups, domain security policies, and business process security policies to ensure robust access control.
• Account Lifecycle Management: Oversee the creation, updating, and deactivation of user accounts, ensuring accurate and secure account management.
• Incident Management: Provide Level 2 support by managing and resolving security-related incidents efficiently.
• Emergency Access Management: Administer and monitor emergency access (Break Glass) procedures to ensure secure and controlled access during critical situations.
• Compliance and Training Support: Assist in compliance reviews and facilitate security training to promote adherence to security policies and standards.
• Access Management for Lower Environments: Manage and oversee access to lower environments, such as sandbox and implementation tenants, ensuring appropriate access controls are in place.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance