We found 919 jobs matching your search


Job Description

• Administer IICS orgs, environments (DEV/QA/UAT/PROD), and Secure Agents
• Manage users, roles, and access control
• Execute deployments across environments and support release cycles
• Monitor jobs and platform health
• Coordinate planned/unplanned downtime notifications
• Work with Informatica Support on tickets, logs, RCA, and resolutions
• Participate in platform discussions, upgrades, and performance tuning
• Maintain admin documentation, SOPs, and runbooks
• Act as a liaison between dev, infra/cloud teams, and Informatica

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : IICS Administrator – Consultant (Z2 Category)

Job Description

Job Title: IICS Solution Designer – Manager (Salesforce Integrations) – Z3 Category
Experience: 6–9 / 10 Years

Role Summary
Lead solution design for IICS-based integrations, with a primary focus on Salesforce integrations. Responsible for defining scalable, high-performance integration solutions, providing ad-hoc design guidance, and participating in architecture and solutioning discussions.

Key Responsibilities
• Own end-to-end solution design using Informatica IICS / IDMC
• Design Salesforce inbound/outbound integrations (batch & near real-time)
• Account for Salesforce dependent objects, relationships, API limits, and data constraints
• Design integrations between Salesforce, Oracle, and other enterprise systems
• Provide ad-hoc solutioning during escalations and critical business needs
• Review designs, guide implementation teams, and ensure design adherence
• Participate in solutioning, architecture, and platform discussions

Required Skills & Experience
• Strong experience with Informatica IICS / IDMC solution design, including CDI and CAI
• Proven expertise in Salesforce integrations and the Salesforce data model
• Solid understanding of Oracle databases and enterprise integrations
• Experience with REST/API integrations and error-handling patterns

Role Level Expectation
• Owns integration solution design
• Acts as the go-to person for Salesforce–IICS solutioning
• Provides technical direction without being a pure developer

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : IICS Solution Designer – Manager (Salesforce Integrations)

Job Description

Functional Tester – 2 positions
Location: PAN India

Functional Tester – Pricing & Contracting Transformation

Role Overview
Responsible for validating end-to-end business functionality across pricing models, contracting transformation initiatives, and commercial operations platforms, ensuring accurate pricing, compliance, and seamless process adoption during transformation programs.

Key Responsibilities
• Validate pricing logic, margin calculations (OM%), and discounting rules across systems.
• Perform functional testing for contracting and lifecycle transformation scenarios.
• Test end-to-end workflows covering demand creation, pricing approval, contractor onboarding, billing, and revenue recognition.
• Translate business requirements into detailed test scenarios, test cases, and traceability matrices.
• Execute system, integration, regression, and UAT-support testing for pricing and contractor modules.
• Validate integration touchpoints with Finance, HR, Procurement, Vendor Management, and Billing systems.
• Identify defects, assess business impact, and work with IT and business teams on resolution.
• Support transformation releases, data migrations, and change requests with strong business validation.
• Ensure compliance with rate cards, commercial governance, and audit requirements.

Required Skills & Experience
• Strong experience in functional testing for pricing, commercial ops, or workforce management systems.
• Solid understanding of pricing constructs (rate cards, markups, margins, cost models).
• Ability to work with BRDs, FRDs, user stories, and acceptance criteria.
• Exposure to UAT coordination, defect management tools, and test documentation.
• Strong stakeholder communication and cross-functional collaboration skills.

Nice to Have
• Experience in large-scale transformation programs (pricing, operating model).
• Exposure to life sciences, consulting, or professional services commercial models.
• Knowledge of tools like Planisware, SAP, Coupa, Fieldglass, VMS, or custom pricing platforms.
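For context, the pricing constructs this role validates (rate cards, markups, OM%) reduce to simple arithmetic. The sketch below is purely illustrative; the function names and figures are invented, not from the posting:

```python
def operating_margin_pct(revenue: float, cost: float) -> float:
    """Operating margin (OM%) expressed as a percentage of revenue."""
    return 100 * (revenue - cost) / revenue

def billed_rate(cost_rate: float, markup_pct: float) -> float:
    """Rate-card style billed rate: cost rate plus a percentage markup."""
    return cost_rate * (1 + markup_pct / 100)

# A contractor costing 80/hr with a 25% markup bills at 100/hr,
# which corresponds to a 20% operating margin on that revenue.
```

A tester would typically assert these relationships against system output for each rate-card scenario.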

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Functional Tester

Job Description

Role Summary
Hands-on IICS Administrator responsible for day-to-day platform administration, deployments, monitoring, incident coordination, and vendor interactions to ensure stable and high-performing IICS environments.

Key Responsibilities
• Administer IICS orgs, environments (DEV/QA/UAT/PROD), and Secure Agents
• Manage users, roles, and access control
• Execute deployments across environments and support release cycles
• Monitor jobs and platform health
• Coordinate planned/unplanned downtime notifications
• Work with Informatica Support on tickets, logs, RCA, and resolutions
• Participate in platform discussions, upgrades, and performance tuning
• Maintain admin documentation, SOPs, and runbooks
• Act as a liaison between dev, infra/cloud teams, and Informatica

Required Skills
• 3–6 years of experience with Informatica IICS / IDMC (CDI, CAI)
• Strong knowledge of Secure Agents, deployments, scheduling, and monitoring
• Production support and platform administration experience
• Hands-on experience with Informatica Support and troubleshooting

Good to Have
• Cloud exposure (AWS / Azure / GCP)
• Basic networking and connectivity knowledge
• IICS on-prem to cloud migration exposure
• ITSM tools (ServiceNow, JIRA)

Role Expectation
• Independently handles admin and deployment activities
• Escalates issues with clear analysis
• Supports platform stability (not overall architecture ownership)

  • Salary : Rs. 10,00,000 - Rs. 18,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : IICS Administrator – Consultant

Job Description

Key Responsibilities
• Design, implement, and manage AWS-based cloud infrastructure for scalable, secure, and highly available applications.
• Develop and maintain Infrastructure as Code (IaC) using Terraform and/or AWS CloudFormation to ensure consistent, repeatable deployments.
• Deploy, manage, and operate Kubernetes clusters (Amazon EKS preferred), including networking, autoscaling, security, and lifecycle management.
• Build, optimize, and maintain CI/CD pipelines using GitLab CI/CD and/or AWS CodePipeline for automated build, test, and deployment workflows.
• Administer and manage GitLab (self-hosted), including upgrades, backup/restore, performance tuning, user access, runners, and security hardening.
• Administer and manage JFrog Artifactory (self-hosted), including repository configuration, access control, storage management, upgrades, and integrations with CI/CD pipelines.
• Implement and promote SRE practices, including continuous monitoring, alerting, incident management, root cause analysis, and reliability improvements.
• Perform troubleshooting and problem resolution across Linux-based environments, cloud infrastructure, and CI/CD platforms.
• Collaborate with development, platform, and security teams to improve system reliability, deployment velocity, and operational excellence.

Required Skills & Technical Expertise

Cloud Platforms
• AWS (primary)
• GCP (secondary / nice to have)

Infrastructure & Automation
• Infrastructure as Code: Terraform
• Configuration Management: Ansible

Scripting & Programming
• Go
• Python
• Shell scripting
• Bazel

Containers & Orchestration
• Docker
• Kubernetes (EKS preferred)
• Helm
• Kubernetes manifests

CI/CD & DevOps Platforms
• GitLab (self-hosted administration & CI/CD pipelines)
• JFrog Artifactory (self-hosted administration)
• AWS CodePipeline

Monitoring & Observability
• Prometheus
• Grafana
• OpenSearch

Nice to Have
• Experience implementing SLOs, SLIs, and error budgets
• Knowledge of cloud security and compliance best practices
• Experience with multi-cloud or hybrid environments
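Since the role asks for experience with SLOs and error budgets, the underlying arithmetic can be sketched in a few lines. This is a toy illustration only; the function names and figures are not from the posting:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (in minutes) for a given SLO over a window."""
    return window_days * 24 * 60 * (1.0 - slo)

def budget_remaining(slo: float, downtime_min: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative once blown)."""
    return 1.0 - downtime_min / error_budget_minutes(slo, window_days)

# A 99.9% SLO over a 30-day window allows roughly 43.2 minutes of downtime;
# 21.6 minutes of observed downtime leaves about half the budget.
```

In practice, tools like Prometheus/Grafana compute this continuously from SLI time series rather than from hand-entered downtime figures.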

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Site Reliability Engineer (SRE) – Cloud Engineer

Job Description

Level: SA / Immediate joiners only
Primary mandatory skill: Cloudflare WAF (incl. Cloudflare Advanced DDoS protection)
Secondary mandatory skill: Security Operations
Open to CWRs: Yes
Flexible to hire in any location: Yes

Detailed Job Description

Cloudflare WAF Management
• Design, implement, and manage Cloudflare WAF policies across enterprise applications
• Tune managed rulesets, custom rules, and rate-limiting policies to minimize false positives
• Monitor, analyze, and respond to WAF security events and incidents
• Implement protection against OWASP Top 10 threats, DDoS attacks, bot abuse, and API threats
• Coordinate with application teams to onboard new applications to Cloudflare securely

CDN & Performance Optimization
• Manage Cloudflare CDN configurations for optimal performance and availability
• Configure caching strategies, page rules, and traffic routing policies
• Troubleshoot latency, caching, and origin connectivity issues
• Support global traffic management and high-availability architectures

Cloudflare Workers & Edge Logic
• Develop and maintain Cloudflare Workers for edge-based logic, request validation, and traffic manipulation
• Implement Workers for security use cases such as header validation, token checks, redirects, and API protection
• Collaborate with developers to deploy and manage Workers in CI/CD pipelines

Security Operations & Governance
• Integrate Cloudflare logs with SIEM/SOC tools for monitoring and alerting
• Perform regular security reviews, audits, and compliance checks
• Create documentation, runbooks, and operational procedures for WAF and edge security
• Stay current with Cloudflare features, threat intelligence, and industry best practices

Required Skills & Qualifications
• Experience in web security, network security, or cloud security
• Strong hands-on experience with Cloudflare WAF, CDN, and security products
• Solid understanding of HTTP/HTTPS, TLS, DNS, and web application architecture
• Experience managing WAF rules, rate limiting, bot management, and DDoS protection
• Working knowledge of Cloudflare Workers (JavaScript)
• Experience integrating security logs with tools such as Splunk or S3
• Familiarity with OWASP Top 10, API security, and Zero Trust concepts
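Rate limiting, one of the WAF duties above, is conceptually a sliding-window counter per client. Cloudflare enforces this at the edge with its own rules engine; the Python sketch below only illustrates the idea and is not how Cloudflare is configured:

```python
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `limit` requests within any `window_s`-second window."""

    def __init__(self, limit: int, window_s: float):
        self.limit = limit
        self.window_s = window_s
        self.hits = deque()  # timestamps of accepted requests

    def allow(self, now: float) -> bool:
        # Evict timestamps that have aged out of the window.
        while self.hits and now - self.hits[0] >= self.window_s:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False  # over the limit: a real WAF would block or challenge
```

A real deployment keys one window per client IP or API token and responds to violations with a block, a managed challenge, or a 429.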

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Cloudflare WAF

Job Description

Key Responsibilities
• Design, develop, and maintain scalable web applications using the MERN stack.
• Build reusable and efficient React components and implement responsive UIs.
• Develop robust RESTful APIs using Node.js and Express.js.
• Design and manage MongoDB databases, schemas, and queries.
• Integrate third-party APIs and services.
• Implement authentication, authorization, and security best practices (JWT/OAuth).
• Optimize applications for performance, scalability, and reliability.
• Write clean, maintainable, and testable code.
• Collaborate with cross-functional teams in an Agile/Scrum environment.
• Participate in code reviews, troubleshooting, and debugging.
• Deploy applications on cloud platforms and manage CI/CD pipelines.

Mandatory Skills
• Strong proficiency in JavaScript (ES6+)
• Expertise in React.js, Redux/Context API, and Hooks
• Strong backend development using Node.js and Express.js
• Hands-on experience with MongoDB and Mongoose
• Experience building RESTful APIs
• Knowledge of HTML5, CSS3, Bootstrap/Tailwind
• Experience with Git version control
• Understanding of JWT, OAuth, authentication & authorization
• Experience with Postman and API testing
• Familiarity with Docker and cloud platforms (AWS/Azure/GCP)
• Understanding of CI/CD pipelines
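For context on the JWT requirement: a token is two base64url-encoded JSON segments plus an HMAC signature over them. In a MERN stack this is normally handled by a library such as `jsonwebtoken` in Node.js; the sketch below shows the HS256 mechanics in Python purely for illustration, not as production auth code:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Build a minimal HS256-style token: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Real implementations also validate registered claims such as `exp` and `aud`, which this sketch omits.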

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : MERN Stack Developer (Z2)

Job Description

Consultant Software Engineer – Kafka & NiFi Admin with Ansible & Terraform (250008TU)

Position Overview
We are seeking an experienced Senior Kafka & NiFi Administrator to design, deploy, manage, and optimize enterprise-scale data streaming and data flow platforms. The ideal candidate will have deep expertise in Apache Kafka, Kafka Connect, Schema Registry, and Apache NiFi, along with strong automation, scripting, CI/CD, and cloud-native experience. This role ensures high availability, secure configurations, performance tuning, and operational excellence for real-time data pipelines.

Key Responsibilities

1. Kafka Administration
• Install, configure, and manage Apache Kafka clusters (on-premise or cloud-native: Azure HDInsight).
• Manage Kafka ecosystem components: Kafka Connect, Schema Registry, Kafka Streams, ZooKeeper, etc.
• Perform cluster scaling, partition rebalancing, topic management, and multi-DC replication (MirrorMaker).
• Implement monitoring, alerting, and logging using tools such as Prometheus, Grafana, and ELK/ECE.
• Ensure Kafka security (TLS encryption, SASL, RBAC, Kerberos, OAuth).
• Troubleshoot broker issues, performance bottlenecks, consumer lag, and message serialization errors.

2. NiFi Administration
• Install, manage, and maintain Apache NiFi and NiFi Registry.
• Design, optimize, and troubleshoot complex NiFi data flows.
• Manage NiFi cluster configuration, back-pressure settings, tuning, and the provenance repository.
• Integrate NiFi with Kafka, S3, HDFS, RDBMS, REST APIs, and cloud services.
• Implement access control, SSL/TLS security, policies, and NiFi user/group management.

3. Automation & DevOps
• Develop automation using Ansible, Terraform, or Bash.
• Build CI/CD pipelines for Kafka, NiFi, and data flow deployments using Jenkins, Azure DevOps, or GitHub Actions.
• Automate cluster provisioning and configuration using Ansible & Terraform.
• Create reusable templates and automation for topic creation, ACL management, connector deployment, and flow lifecycle.

4. Operations & Support
• Provide L3 support for streaming platforms, including incident analysis and root cause identification.
• Establish and enforce best practices for data governance, data flow reliability, and operational standards.
• Maintain detailed documentation for configurations, architectures, and runbooks.
• Collaborate with platform engineering, data engineering, security, SRE, and cloud teams.

Key Skills Required

Must have:
• 6–10+ years of experience in Kafka & NiFi administration
• Strong knowledge of big data architecture and the administrator's role
• Strong knowledge of Hadoop, Kafka internals, NiFi flow design, performance tuning, various data formats, Schema Registry, Kafka Connect, etc.
• Configuration and performance tuning of Kafka & NiFi clusters
• Application deployment and disaster recovery
• Automation of big data infrastructure with Ansible & Terraform

Good to have:
• Java & shell scripting
• Hadoop administration
• Excellent communication skills
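Consumer lag, one of the troubleshooting items above, is simply the gap between each partition's log-end offset and the group's committed offset. Tooling such as `kafka-consumer-groups.sh` reports this directly; the sketch below only illustrates the arithmetic, with made-up offsets:

```python
def consumer_lag(end_offsets: dict, committed: dict) -> dict:
    """Per-partition lag: log-end offset minus the group's committed offset."""
    return {p: end_offsets[p] - committed.get(p, 0) for p in end_offsets}

def total_lag(end_offsets: dict, committed: dict) -> int:
    """Total messages the consumer group has yet to process."""
    return sum(consumer_lag(end_offsets, committed).values())

# Example: partition 0 is fully caught up, partition 1 is 80 messages behind.
end = {0: 1500, 1: 1480}
committed = {0: 1500, 1: 1400}
```

A steadily growing total is the usual signal that consumers are under-provisioned or stalled.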

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Consultant Software Engineer

Job Description

Lead Software Engineer – DevOps (260006HM)

Missions
DevOps engineer.

Key Responsibilities
• CI/CD Pipeline Management: Design, implement, and maintain automated pipelines to accelerate development and software delivery.
• Collaboration and Automation: Work with development and IT teams to automate repetitive tasks and resolve production bottlenecks.
• Containerization & Security: Manage containers (e.g., Docker, Kubernetes/OpenShift) and enforce security best practices.
• Monitoring and Reliability: Implement monitoring solutions to ensure system uptime, performance, and stability.

Profile: Required Skills and Qualifications
• Experience: Proven experience in a DevOps or Site Reliability Engineering (SRE) role.
• Technical Skills: Deep knowledge of CI/CD principles; proficiency with Linux/Unix scripting (Python, Bash), CI/CD tools (Jenkins, GitLab CI), and container orchestration (Kubernetes/OpenShift); knowledge of microservice architecture, the Spring framework, and mobile development languages.
• Tools: Experience with Git, Artifactory/Xray, SonarQube, and monitoring tools (Prometheus, Grafana).
• Soft Skills: Strong problem-solving, analytical, and communication skills to foster collaboration between teams.

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Lead Software Engineer - DevOps