8+ years – Java Full Stack Developer with Angular and Spring Boot, highly skilled in the delivery of high-quality, low-defect code for large enterprise applications. Must be able to take stories and designs and transform them into functional code. Must have strong knowledge of design patterns and best practices. Must be familiar with SAFe Agile and be available to work US CT business hours.
Experience in the Telecom domain will be a strong plus
Should have good communication skills
Ensure compliance with processes set by clients
Ability to work in a collaborative multi-application team environment.
Experience and desire to work in a Global delivery environment
Salary : Rs. 10,00,000 – Rs. 20,00,000
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Skill 1:
Level – SA & M
Location – Bangalore & Chennai
Notice – Immediate
Primary mandate skill required – GCP Cloud Infra and Solutions, Terraform, Platform CI/CD pipelines, Google Cloud Resource Deployment
Secondary mandate skill required – Scripting (Python, Bash)
Open to look at CWRs – Yes
Flexible to hire in any location – Any Cognizant location
Detailed Job Description –
Senior Cloud Engineer to build and automate the core infrastructure for a data and AI/ML platform on Google Cloud.
You will partner with a Senior Data Engineer to create a scalable, secure, and reliable foundation for services like BigQuery, Dataflow, and AI Platform.
Primary Responsibilities:
Design, build, and manage robust GCP infrastructure using Infrastructure as Code (Terraform, Deployment Manager).
Develop and maintain sophisticated CI/CD pipelines for the data platform using Cloud Build and Spinnaker.
Implement and optimize GCP Cloud Storage solutions to support massive datasets.
Automate infrastructure and operational workflows using Python and Bash (an illustrative sketch follows this list).
Collaborate closely with data engineers to ensure seamless deployment and operation of data processing and machine learning workloads.
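As a minimal, hedged sketch of the Python-based automation described above (not taken from any client codebase), the example below uses the google-cloud-storage client to apply a uniform object-lifecycle rule across a project's buckets; the project ID and the 30-day threshold are assumptions.

```python
# Illustrative sketch only: apply a 30-day delete lifecycle rule to every
# bucket in a project using the google-cloud-storage client library.
# PROJECT_ID and the 30-day threshold are placeholder assumptions.
from google.cloud import storage

PROJECT_ID = "example-data-platform"  # hypothetical project


def apply_lifecycle_rules(project_id: str, age_days: int = 30) -> None:
    client = storage.Client(project=project_id)
    for bucket in client.list_buckets():
        # Append a delete action for objects older than age_days,
        # then persist the updated lifecycle configuration.
        bucket.add_lifecycle_delete_rule(age=age_days)
        bucket.patch()
        print(f"Updated lifecycle policy for gs://{bucket.name}")


if __name__ == "__main__":
    apply_lifecycle_rules(PROJECT_ID)
```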
Required Skills:
Deep expertise in GCP infrastructure services.
Proven hands-on experience with IaC (Terraform strongly preferred).
Strong background in building and managing CI/CD pipelines (Cloud Build, Spinnaker).
Proficient scripting skills in Python and/or Bash.
Experience supporting large-scale data and analytics platforms is a major plus.
Relevant certifications (e.g., Google Cloud Certified - Professional Cloud DevOps Engineer, Professional Cloud Architect) are highly desirable.
Skill 2:
Primary mandate skill required – GCP Infra, Infrastructure as Code (IaC) using Terraform, DevOps, CI/CD pipelines, GKE
Secondary mandate skill required – Monitoring (Cloud Monitoring / Cloud Logging), Anthos Service Mesh
Flexible to hire in any location – Any Cognizant location
Level – SA/M
Notice – 30 Days
Detailed Job Description –
Kubernetes Platform: Hands-on experience with Google Kubernetes Engine (GKE).
Infrastructure as Code (IaC) & CI/CD: Very strong expertise in Terraform, Helm, Cloud Build, Cloud Source Repositories, and Artifact Registry.
GitOps: Strong experience in managing and architecting GitOps CI/CD workflows using Config Sync.
Monitoring & Observability: Strong proficiency with Google Cloud's operations suite, including Cloud Monitoring for creating and customizing metrics/dashboards, and Cloud Logging for log analysis (an illustrative log-query sketch follows this list).
Service Mesh: Development and working experience with Anthos Service Mesh lifecycle management, including configuring and troubleshooting applications deployed within the mesh.
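The Cloud Logging item above can be pictured with a short, hedged sketch; the project ID and filter string are assumptions rather than a prescribed query.

```python
# Illustrative sketch: pull recent ERROR-severity entries for GKE workloads
# with the google-cloud-logging client. Project ID and filter are assumptions.
from google.cloud import logging

client = logging.Client(project="example-gke-project")  # hypothetical project
log_filter = (
    'resource.type="k8s_container" '
    'AND severity>=ERROR '
    'AND timestamp>="2024-01-01T00:00:00Z"'
)

for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=20
):
    print(entry.timestamp, entry.severity, entry.payload)
```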
Skill 3:
Primary mandate skill required – GCP Infra, Infrastructure as Code (IaC) using Terraform, DevOps, CI/CD pipelines, GKE
Secondary mandate skill required – Monitoring with third-party solutions such as Prometheus and Grafana.
Level – SA/M
Flexible to hire in any location – Yes, pan India. Shift time: 2 pm – 11 pm.
Notice – 30 Days
JD and Role Overview
We are seeking a highly skilled and experienced Senior Cloud and DevOps Engineer to join our team. The ideal candidate will have deep expertise in cloud infrastructure, DevOps practices, and automation, with a strong focus on Google Cloud Platform (GCP) and preferably Microsoft Azure. This role involves designing, implementing, monitoring, and maintaining scalable cloud environments and CI/CD pipelines to support mission-critical applications.
Key Responsibilities
Develop and maintain Infrastructure as Code (IaC) using Terraform.
Design and implement secure cloud networking solutions, including firewalls, WAFs, ingress/egress rules, and SSL termination.
Architect, deploy, and manage Kubernetes clusters, including setup of application components, ingress controllers, secrets management, and lifecycle maintenance (an illustrative cluster health-check sketch follows this list).
Build and optimize CI/CD pipelines using GitHub Actions.
Monitor cloud infrastructure using native tools and third-party solutions such as Prometheus and Grafana.
Collaborate with cross-functional teams to support deployments and troubleshoot issues.
Participate in early morning or late evening calls to support global deployments and incident resolution whenever needed.
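As a hedged illustration of routine Kubernetes operations work (not a prescribed tool or script), the sketch below uses the official Kubernetes Python client to flag pods that are not healthy; it assumes kubectl credentials are already configured locally.

```python
# Illustrative sketch: list pods that are not in a Running/Succeeded phase
# across all namespaces, using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```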
Required Qualifications
Proven experience with Google Cloud Platform (GCP); Azure experience is a strong plus.
Deep understanding of cloud infrastructure, networking concepts and security configurations.
Strong hands-on experience with Kubernetes cluster setup and management.
Proficiency in Terraform and GitHub Actions for automation and CI/CD.
Experience with cloud monitoring tools including Prometheus, Grafana, and native cloud solutions.
Excellent problem-solving skills and ability to work independently in a fast-paced environment.
Flexibility to work outside standard hours to support global operations.
Salary : Rs. 10,00,000 – Rs. 20,00,000
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Description:
A Big Data Engineer is an IT professional specializing in the design, development, and maintenance of systems that handle and process large, complex datasets (big data). They are responsible for building the infrastructure and pipelines that allow organizations to collect, store, process, and analyze vast amounts of data, often in the terabyte and petabyte range.
Key Responsibilities of a Big Data Engineer:
Designing and Building Data Infrastructure: Creating the architecture and systems for storing and processing massive amounts of data.
Data Collection and Processing: Gathering data from various sources, cleaning and transforming it, and preparing it for analysis.
Developing Data Pipelines: Building pipelines to move data from source systems to data storage and processing systems.
Ensuring Data Quality and Integrity: Implementing processes to ensure the accuracy and reliability of the data.
Optimizing Performance: Improving the efficiency and scalability of data storage and processing systems.
Collaboration with Other Teams: Collaborating with data scientists, analysts, and other teams to support data-driven decision-making.
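As a minimal, hedged sketch of the pipeline work described above, assuming PySpark as the processing engine (the storage paths and column names are placeholders, not part of this role's actual stack):

```python
# Illustrative PySpark sketch: read raw events, clean them, aggregate, and
# write a curated Parquet dataset. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-events-pipeline").getOrCreate()

raw = spark.read.json("gs://example-raw-zone/events/")  # collect raw data
clean = (
    raw.dropDuplicates(["event_id"])                     # basic cleaning
       .filter(F.col("user_id").isNotNull())
)
daily = (
    clean.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
         .agg(F.count("event_id").alias("event_count"))  # aggregate for analysis
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-curated-zone/daily_event_counts/"      # curated storage layer
)
```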
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Description: Security Administrator (Offshore)
• Security Configuration and Maintenance: Configure and maintain security groups, domain, and business process security policies to ensure robust access control.
• Account Lifecycle Management: Oversee the creation, updating, and deactivation of user accounts, ensuring accurate and secure account management.
• Incident Management: Provide Level 2 support by managing and resolving security-related incidents efficiently.
• Emergency Access Management: Administer and monitor emergency access (Break Glass) procedures to ensure secure and controlled access during critical situations.
• Compliance and Training Support: Assist in compliance reviews and facilitate security training to promote adherence to security policies and standards.
• Access Management for Lower Environments: Manage and oversee access to lower environments, such as sandbox and implementation tenants, ensuring appropriate access controls are in place.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Responsibilities
1. Deeply understand corporate business processes, especially the application scenarios of the D365 system, with a focus on integrating knowledge of post-sales service business and processes. Through methods such as interviews and business observation, comprehensively collect requirements from business departments, accurately identify business pain points, clarify business objectives, and create detailed, professional D365 business requirement documents, providing a solid foundation for system configuration, customization, and development. Leverage your post-sales service knowledge to engage with customers during the post-sales phase, gather feedback on system usage, and ensure continuous improvement of the D365 solution based on real-world post-sales experiences.
2. Conduct in-depth analysis of business data generated by the enterprise in the D365 system. Use data analysis tools to mine data value, combine industry data with the enterprise's actual situation, identify potential problems and optimization directions in business operations, and provide data-driven support for business decisions and D365 system function optimization. Apply your understanding of post-sales service processes to analyze data on customer support requests, system issues, and user satisfaction, identify areas for enhancement, and inform post-implementation improvements from a post-sales service perspective.
3. Participate in the review and optimization of enterprise business processes. Relying on the D365 system and your expertise in post-sales service business and processes, analyze how well existing processes fit the system's functions and identify process bottlenecks and redundant steps. Propose process improvement plans that integrate D365 system features, industry best practices, and post-sales service requirements, enhancing business operation efficiency, system utilization, and post-sales service quality. Collaborate with the post-sales team, using your process knowledge to understand customer pain points during system operation and incorporate these insights into process optimization efforts.
4. Collaborate closely with the D365 development team. Translate business requirements accurately into functional requirements the system can implement, and help technical staff understand the business logic. Act as a bridge between business and technology during system development, configuration, and customization, promptly resolving disagreements and issues between the two sides so that projects progress smoothly and meet business expectations. During post-sales, work with developers to troubleshoot system-related problems reported by customers, using your post-sales service knowledge to translate customer concerns into actionable technical tasks and align solutions with post-sales service needs.
5. Regularly review and evaluate the application of the D365 system and the results of business analysis. Collect feedback from business departments, continuously optimize business analysis methods and processes based on the D365 system, promote a close fit between system functions and business requirements, and raise the enterprise's level of digital operations. In particular, focus on post-sales feedback, leveraging your post-sales service business and process knowledge to drive iterative improvements and ensure long-term success and satisfaction.
Salary : As per industry standard.
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Azure Data Factory, SQL (Advanced), SSIS, Databricks, SDFS
• Azure Data Factory (ADF) pipelines and PolyBase (an illustrative pipeline-trigger sketch follows this list)
• Good knowledge of Azure Storage – including Blob Storage, Data Lake, Azure SQL, Data Factory V2, Databricks
• Can work on streaming analytics and various data services in Azure, like Data Flow, etc.
• Ability to develop extensible, testable and maintainable code
• Good understanding of the challenges of enterprise software development
• Track record of delivering high volume, low latency distributed software solutions
• Experience of working in Agile teams
• Experience of the full software development lifecycle including analysis, design, implementation, testing and support
• Experience of mentoring more junior developers and directing/organizing the work of the team
• Client-facing technical role; assertive, with good team-member skills.
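A hedged sketch of the kind of programmatic ADF work referenced in this list, using the azure-identity and azure-mgmt-datafactory packages; the subscription ID, resource group, factory, and pipeline names are placeholder assumptions.

```python
# Illustrative sketch: trigger an existing ADF pipeline run and read back its
# run ID and status. All resource names below are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(
    credential, "00000000-0000-0000-0000-000000000000"  # placeholder subscription
)

run_response = adf_client.pipelines.create_run(
    resource_group_name="example-rg",
    factory_name="example-adf",
    pipeline_name="CopyBlobToSqlPipeline",
    parameters={"loadDate": "2024-01-01"},
)
print("Started pipeline run:", run_response.run_id)

# Poll the run status once (a real job would loop or rely on monitoring).
run = adf_client.pipeline_runs.get("example-rg", "example-adf", run_response.run_id)
print("Current status:", run.status)
```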
Good-to-Have:
• Experience with data warehouse applications
• Experience in TTH Domain Projects
• Knowledge of Azure DevOps is desirable.
• Knowledge of CI/CD and DevOps practices is an advantage.
Salary : Rs. 0.0 - Rs. 15.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance
Job Description:
Must-Have
1.Strong understanding of REFramework, state machines, and transaction-based processing.
2.Strong UiPath development experience (minimum 3-5 years).
3.Work with business analysts to translate functional requirements into technical architecture and convert that into code.
4.Design and optimize Orchestrator architecture, multi-tenant environments, and disaster recovery strategies.
5.Build robust, reusable components following best practices and frameworks (e.g., REFramework).
6.Debug and resolve production issues, ensuring minimal downtime.
7.Implement Orchestrator-based automation scheduling, asset management, and queue handling.
8.Document workflows, maintain version control, and ensure compliance with security policies.
9.Ability to develop complex data handling logic, including LINQ queries, data tables, and JSON/XML parsing.
10.Experience in error handling and retry strategies for stable automation.
11.Hands-on experience with Queues, Triggers, Assets, Business Rules in Orchestrator.
12.Knowledge of selectors, dynamic selectors, and UI automation best practices.
13.Guide and mentor the development team for efficient and standardized development.
14.Ability to conduct peer code reviews and enforce best practices.
Salary : Rs. 0.0 - Rs. 9,00,000.0
Industry : IT-Software / Software Services
Functional Area : IT Software - Application Programming, Maintenance