We found 22 jobs matching your search


Job Description

Skill: Data Governance
Experience: 3+ yrs
Location: NOIDA
Timings: 2:00 PM to 10:30 PM
Cab facility provided: Yes

• Administering and maintaining Informatica Data Governance platforms: both the on-premises version (EDC, Axon, IDQ) and the cloud platform (CDGC service).
• Integrating the Informatica Data Governance platform with enterprise systems such as Snowflake, AWS Athena, AWS S3, IBM DB2, Power BI, MS SQL, Oracle, etc.
• Managing metadata and implementing business lineage.
• Implementing Business Glossary association in Axon.
• Developing and maintaining data quality assets such as rules, profiles, mappings, workflows, applications, and scorecards.
• Collaborating with data stewards, data owners, and business users to define and enforce data governance requirements.
• Maintaining documentation and ensuring compliance with internal data governance standards.
• Creating and managing Power BI reports and semantic models.
• Coordinating with support groups to resolve issues with quick turnaround.

Mandatory:
• Bachelor's degree in computer science or a similar field, or equivalent work experience.
• 3+ years of experience working on a data governance platform.
• Understanding of Power BI reports and semantic models.
• Expertise in on-premises Informatica Data Governance tools: Informatica Enterprise Data Catalog (EDC), Informatica Axon, and Informatica Data Quality (IDQ).
• Experience with IDMC data governance modules: Data Governance and Catalog, Data Profiling, Data Quality, and Metadata Command Center.
• Insight into data platforms such as Snowflake, AWS, and Azure.
• Experience writing SQL queries and Python scripts.
• Strong learning attitude.
• Good written and verbal communication skills.
• Experience working in a team spread across multiple locations.

Preferable:
• Knowledge of AWS services.
• Knowledge of Snowflake.

  • Salary: Rs. 70,000 - Rs. 1,30,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Governance
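The data quality assets this role maintains (rules, profiles, scorecards) are normally built inside Informatica IDQ itself, but the underlying idea can be sketched in plain Python. The record layout, field names, and regex below are hypothetical, chosen only to illustrate a completeness rule, a format rule, and a scorecard-style pass rate:

```python
# Plain-Python analogue of IDQ-style data-quality rules (illustrative only).
import re

def completeness_rule(record, field):
    """Rule: the field must be present and non-empty."""
    value = record.get(field)
    return value is not None and str(value).strip() != ""

def format_rule(record, field, pattern):
    """Rule: the field must fully match a regular-expression pattern."""
    return re.fullmatch(pattern, str(record.get(field, ""))) is not None

def score(records, rules):
    """Scorecard: fraction of records that pass every rule."""
    passed = sum(1 for r in records if all(rule(r) for rule in rules))
    return passed / len(records) if records else 0.0

customers = [
    {"id": "C001", "email": "a@example.com"},
    {"id": "C002", "email": ""},             # fails completeness
    {"id": "C003", "email": "not-an-email"}, # fails format
]
rules = [
    lambda r: completeness_rule(r, "email"),
    lambda r: format_rule(r, "email", r"[^@\s]+@[^@\s]+\.[^@\s]+"),
]
print(round(score(customers, rules), 2))  # → 0.33
```

In IDQ the same logic would live in reusable rule specifications and mapplets, with the scorecard surfaced through the Analyst tool rather than a function call.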

Job Description

Skill: Microsoft Fabric
Location: NOIDA
Timings: 2:00 PM to 10:30 PM
Cab facility provided: Yes

• Understand requirements and perform data analysis.
• Set up Microsoft Fabric and its components.
• Build secure, scalable solutions across the Microsoft Fabric platform.
• Create and manage Lakehouses.
• Implement Data Factory processes for data ingestion, scalable ETL, and data integration.
• Design, implement, and manage comprehensive warehousing solutions for analytics using Fabric.
• Create and schedule data pipelines using Azure Data Factory.
• Build robust data solutions using Microsoft data engineering tools such as Notebooks, Lakehouse, and Spark applications.
• Build and automate deployment pipelines using CI/CD tools for the release of Fabric content from lower to higher environments.
• Set up and use Git as a repository for versioning Fabric components.
• Create and manage Power BI reports and semantic models.
• Write and optimize complex SQL queries to extract and analyze data, ensuring correct data processing and accurate reporting.
• Work closely with customers, business analysts, and technology and project teams to understand business requirements, and drive the analysis and design of quality technical solutions that align with business and technology strategies and comply with the organization's architectural standards.
• Understand and follow change management procedures to implement project deliverables.
• Coordinate with support groups to resolve issues with quick turnaround.

Mandatory:
• Bachelor's degree in computer science or a similar field, or equivalent work experience.
• 3+ years of experience working in Microsoft Fabric.
• Expertise in working with OneLake, Lakehouse, Warehouse, and Notebooks.
• Strong understanding of Power BI reports and semantic models using Fabric.
• Proven record of building ETL and data solutions using Azure Data Factory.
• Strong understanding of data warehousing concepts and ETL processes.
• Hands-on experience building data warehouses in Fabric.
• Strong skills in Python and PySpark.
• Practical experience implementing Spark in Fabric, scheduling Spark jobs, and writing Spark SQL queries.
• Experience using Data Activator for effective data asset management and analytics.
• Ability to flex and adapt to different tools and technologies.
• Strong learning attitude.
• Good written and verbal communication skills.
• Demonstrated experience working in a team spread across multiple locations.

Preferable:
• Knowledge of AWS services.
• Knowledge of Snowflake.
• Knowledge of real-time analytics in Fabric.

  • Salary: Rs. 70,000 - Rs. 1,30,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Microsoft Fabric
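The role's SQL requirement, complex analytical queries feeding Power BI semantic models, can be illustrated outside Fabric with a minimal in-memory SQLite example. The table, columns, and data below are made up purely for illustration; in Fabric the same query would run against a Warehouse or Lakehouse SQL endpoint:

```python
# Minimal analytical query of the kind a reporting semantic model sits on,
# shown against an in-memory SQLite table (names and data are hypothetical).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 250.0), ("south", 80.0)],
)

# Aggregate revenue per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM sales GROUP BY region ORDER BY revenue DESC"
).fetchall()
print(rows)  # → [('north', 350.0), ('south', 80.0)]
```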

Job Description

AWS Cloud Security | 8 to 13 yrs | SwissRE | BLR/CHE/PUN/HYD/NOI | Sanjeeva/Vaasanthi

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: AWS Cloud Security

Job Description

MS Defender | 8 to 13 yrs | AGI | BLR/CHE/PUN/HYD/NOI | Shiek

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: MS Defender

Job Description

Zscaler + Palo Alto | 4 to 6 yrs | Takeda | BLR/CHE/PUN/HYD/NOI | Sanjeeva/Vaasanthi

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Zscaler + Palo Alto

Job Description

Check Point | 8 to 13 yrs | Biontech | BLR/CHE/PUN/HYD/NOI | Padma

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Check Point developer

Job Description

Req ID: 10640076
Job Title: Developer
Work Location: HYDERABAD / NOIDA
Skill Required: Digital: Microsoft Power Platform
Experience Range: 4-6
Role Description: Digital Microsoft Power Platform - Developer
Essential Skills: Digital Microsoft Power Platform - Developer

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Microsoft Power Platform

Job Description

Role Description: ForgeRock Identity Management
Essential Skills: ReactJS developer

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: ReactJS developer

Job Description

Req ID: 10658751
Location: NOIDA

Core Responsibilities (L3):
• Expert Administration & Configuration: Deep-dive configuration of LogicMonitor, including creating/tuning DataSources, designing complex dashboards, setting up alerts, and implementing LogicModules.
• Complex Troubleshooting & RCA: Handling escalated, high-impact incidents (Sev-1/Sev-2) that cannot be resolved by L1/L2 teams, performing root cause analysis (RCA), and ensuring system stability.
• Automation & Scripting: Utilizing Python, PowerShell, or Groovy to automate routine tasks, create custom monitoring scripts, and leverage the LogicMonitor REST API.
• Infrastructure Integration: Integrating LogicMonitor with IT Service Management (ITSM) tools such as ServiceNow or PagerDuty, and monitoring cloud/on-prem hybrid infrastructure.
• Mentorship & Documentation: Mentoring L1/L2 engineers, providing technical guidance, and creating documentation, playbooks, and runbooks.
• Operational Optimization: Eliminating alert fatigue by tuning alerts and establishing maintenance windows.

Mandatory Technical Skills:
• LogicMonitor Platform: 3-7 years of experience in LogicMonitor implementation and administration.
• Scripting: Proficiency in Python, Groovy, or PowerShell.
• API Proficiency: Experience working with REST APIs.
• OS/Infrastructure: Strong knowledge of Linux/Windows system administration, networking (TCP/IP, SNMP), and cloud environments (AWS/Azure).
• Monitoring Principles: Deep understanding of IT infrastructure and application monitoring best practices.

Qualifications & Requirements:
• Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
• Experience: 5-10 years of total IT experience, with at least 3 years specifically in monitoring/network operations.
• Soft Skills: Strong communication skills for stakeholder management, ability to work independently, and problem-solving skills under pressure.

Skills: Digital: Python
Experience Required: 6-8

  • Salary: As per industry standard.
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: LogicMonitor Administration (L3)
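The automation duty above mentions driving the LogicMonitor REST API from Python. A hedged sketch of LMv1 request signing, following the scheme described in LogicMonitor's API documentation: the access ID/key and resource path here are placeholders, and no HTTP request is actually sent.

```python
# Sketch of LMv1 auth-header construction for the LogicMonitor REST API.
# Per LM's docs the signature is base64(hex(HMAC-SHA256(key,
# verb + epoch_ms + body + resource_path))); credentials are placeholders.
import base64
import hashlib
import hmac
import time

def lmv1_auth_header(access_id, access_key, http_verb, resource_path,
                     data="", epoch_ms=None):
    """Build the value for the 'Authorization: LMv1 ...' header."""
    epoch_ms = str(epoch_ms if epoch_ms is not None else int(time.time() * 1000))
    message = http_verb + epoch_ms + data + resource_path
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"

# Example: sign a GET of the device list (fixed epoch for reproducibility).
header = lmv1_auth_header("AKIAEXAMPLE", "secret", "GET",
                          "/device/devices", epoch_ms=1700000000000)
print(header.startswith("LMv1 AKIAEXAMPLE:"))  # → True
```

In practice this header would be passed to an HTTP client alongside the portal URL (`https://<account>.logicmonitor.com/santaba/rest` + resource path); that part is omitted here since it depends on a live account.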