We found 1403 jobs matching your search

Job Description

Detailed JD (Roles and Responsibilities):
  • Overall 5+ years of hands-on experience in mainframe (MF) technologies (JCL, COBOL, etc.)
  • Mandatory skills: JCL, COBOL
  • Desired skills: mainframe development experience
  • Domain: Banking

  • Salary: Rs. 10,00,000 - Rs. 25,00,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: JCL, COBOL

Job Description

.NET full stack with Angular 19

  • Salary: Rs. 10,00,000 - Rs. 25,00,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: In-house Consultant - .NET Full Stack

Job Description

Data Center Discovery and Migration

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Center Discovery and Migration

Job Description

As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application features with business objectives, ensuring that the final product meets user needs and expectations.

Roles & Responsibilities:
  • Proficiency in HTML, CSS, and JavaScript, essential for building and styling the user interface
  • Experience with JavaScript frameworks and libraries such as React, Angular, or Nest.js
  • Well-versed in AJAX API calls (GET, POST, PUT, DELETE); familiarity with the DOM is essential
  • Understanding of responsive design principles, ensuring the website or application works well on different screen sizes
  • Knowledge of UI/UX principles: understanding how users interact with interfaces and designing for usability
  • Experience with version control systems such as Git and Bitbucket for managing code changes
  • Problem-solving and analytical skills: identifying and resolving issues efficiently
  • Good to have: experience in UI/front-end development for AEM projects

Professional & Technical Skills:
  • Must-have skills: proficiency in user interface development
  • Good-to-have skills: experience with HTML and React Native
  • Strong understanding of responsive design principles and user experience best practices
  • Experience with front-end frameworks and libraries to enhance application functionality
  • Familiarity with version control systems to manage code changes effectively

Additional Information:
  • The candidate should have a minimum of 5 years of experience in user interface development
  • This position is based at our Bengaluru office
  • 15 years of full-time education is required

  • Salary: Rs. 0 - Rs. 1,80,000
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Application Lead

Job Description

Data Engineer | 1-3 years | 9 lacs

Job Summary: We are seeking a highly skilled and detail-oriented Data Engineer with expertise in data architecture, pipeline development, cloud platforms, and big data technologies. The ideal candidate will be responsible for designing, building, and maintaining scalable data infrastructure, ensuring efficient data flow across systems, and enabling advanced analytics and machine learning capabilities.

Key Responsibilities:
  • Good hands-on experience in Power BI
  • Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data
  • Build and optimize data lakes, data warehouses, and real-time streaming systems
  • Collaborate with Data Scientists and Analysts to ensure data availability and quality for modeling and reporting
  • Implement data governance, security, and compliance protocols
  • Develop and maintain data APIs and services for internal and external consumption
  • Work with cloud platforms (AWS, Azure, GCP) to deploy scalable data solutions
  • Monitor and troubleshoot data workflows, ensuring high availability and performance
  • Automate data validation, transformation, and integration processes
  • Manage large-scale datasets using distributed computing frameworks like Spark and Hadoop
  • Stay updated with emerging data engineering tools and best practices

Technical Skills:
  • Visualization tools: Power BI
  • Languages: Python, SQL, Scala, Java
  • Frameworks & tools: Apache Spark, Hadoop, Airflow, Kafka, Flink, NiFi, Beam
  • Libraries: Pandas, PySpark, Dask, FastAPI, SQLAlchemy
  • Cloud platforms: AWS (Glue, Redshift, S3, EMR), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow)
  • DevOps tools: Docker, Kubernetes, Jenkins, Terraform, Git, GitHub
  • Relational databases: MySQL, PostgreSQL, SQL Server
  • NoSQL databases: MongoDB, Cassandra, DynamoDB, Redis
  • Data warehousing: Snowflake, Redshift, BigQuery, Azure Synapse
  • Data architecture & processing: ETL/ELT design and implementation; batch and real-time data processing; data modeling (Star and Snowflake schemas); data quality and lineage tools (Great Expectations, dbt, Amundsen)
  • Monitoring & visualization: Prometheus, Grafana, CloudWatch; integration with BI tools like Power BI and Tableau

Qualifications:
  • Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
  • Proven experience in building and managing data pipelines and infrastructure
  • Strong understanding of data architecture, distributed systems, and cloud-native technologies
  • Excellent problem-solving and communication skills

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

Data Analytics Scrum Master | 6-9 years | 21 lacs

  • Facilitate agile ceremonies and drive scrum best practices for cross-functional data and analytics teams
  • Coordinate backlog refinement, sprint planning, and delivery of data products, ensuring alignment with business goals
  • Remove blockers, foster collaboration, and track progress with a focus on timely, high-quality insights and solutions

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Analytics Scrum Master

Job Description

Data Scientist | 1-3 years | 9 lacs

Job Summary: We are looking for a results-oriented Data Scientist with expertise in Statistics, Economics, Machine Learning, Deep Learning, Computer Vision, and Generative AI. The ideal candidate will have a proven track record of building and deploying predictive models, conducting statistical analysis, and applying cutting-edge AI techniques to solve real-world business challenges.

Key Responsibilities:
  • Develop models for regression, classification, clustering, and time series forecasting
  • Perform hypothesis testing and statistical validation to support data-driven decisions
  • Build and optimize deep learning models (ANN, CNN, RNN including LSTM, BERT)
  • Implement computer vision solutions using YOLOv3, SSD, U-Net, R-CNN, etc.
  • Apply Generative AI and LLMs (GPT-4, LLaMA 2, Bard) for NLP and content generation
  • Create interactive dashboards and applications using Streamlit or Flask
  • Deploy models using AWS SageMaker, Azure, Docker, Kubernetes, and Jenkins
  • Collaborate with cross-functional teams to integrate models into production
  • Handle large datasets using SQL/NoSQL and PySpark
  • Stay updated with the latest AI/ML research and contribute to innovation

Good to Have - Palantir Foundry Experience:
  • Experience using Foundry Code Workbooks for developing and deploying machine learning models
  • Leveraging Foundry Ontology to access and interpret structured enterprise data for modeling
  • Familiarity with Foundry's AI/ML integration capabilities, including support for Python, Spark, and external ML libraries
  • Building interactive dashboards and analytical apps using Foundry's visualization tools
  • Collaborating within Foundry's shared workspace for reproducible and auditable data science workflows
  • Experience deploying models and integrating them into Foundry Operational Workflows for real-time decision support

Technical Skills:
  • Languages: Python
  • Libraries/frameworks: TensorFlow, PyTorch, Keras, Flask, Transformers, LangChain, PySpark, Caffe
  • Visualization & data tools: Pandas, NumPy, Seaborn, Matplotlib, Scikit-learn, SciPy, NLTK, Streamlit, OpenCV, Scikit-Image, Dlib, MXNet, Fasta
  • ML & statistical techniques: regression (linear/logistic), decision trees, random forest, KNN, Naive Bayes; clustering (K-means, hierarchical), time series forecasting; hypothesis testing, statistical inference
  • Deep learning & computer vision: ANN, CNN, RNN (LSTM, BERT), VGGs, YOLOv3, SSD, HOGs, DCGAN, U-Net, R-CNN, NEAT, inpainting
  • Gen-AI / LLMs: Hugging Face, GPT-4, Bard, LLaMA 2, Pinecone, PaLM, GenAI Studio, OpenAI fine-tuning
  • Deployment & DevOps: AWS (SageMaker), Azure, Docker, Kubernetes, Jenkins, Git, GitHub, API integration
  • Databases & tools: MySQL, NoSQL; Jupyter Notebook, Google Colab, Visual Studio, Power BI

Qualifications:
  • Bachelor's or Master's in Statistics, Economics, Computer Science, Data Science, or a related field
  • Demonstrated experience in developing and deploying models in production
  • Strong analytical and statistical skills
  • Excellent communication and collaboration abilities

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Scientist

Job Description

Data Analytics Program Manager | 6-9 years | 21 lacs

  • Lead end-to-end data analytics programs, aligning strategy, delivery, and governance across multiple projects
  • Manage cross-functional teams, budgets, and dependencies to deliver scalable data products and insights
  • Partner with business stakeholders to ensure analytics solutions drive measurable impact and strategic decisions
  • Gather, analyze, and document business requirements to translate them into actionable insights and solutions
  • Collaborate with stakeholders and technical teams to design data-driven processes and reports

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Analytics Program Manager

Job Description

Job Title: Reltio Integration Hub Developer

Job Description: Design, develop, and maintain integration solutions between the Reltio MDM platform and other systems using Reltio Integration Hub and APIs to enable seamless, high-quality data flow and support business operations.

Key Responsibilities:
  • Develop, implement, and support integration solutions using Reltio Integration Hub
  • Collaborate with stakeholders to gather requirements and translate them into technical specifications
  • Build, configure, and maintain batch/real-time data pipelines, connectors, mappings, and workflows
  • Monitor, troubleshoot, and optimize integrations for performance, data accuracy, and reliability
  • Document integration processes and technical configurations
  • Support testing (integration, UAT) and provide ongoing enhancements

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Reltio MDM Hub Developer