We found 1403 jobs matching your search

Job Description

Work Location: Pune
Skill Required: Digital: Spring Boot, Core Java
Experience Range: 4-6 years
Job Description: Core Java, Microservices, Spring Boot
Essential Skills: Core Java, Microservices, Spring Boot

  • Salary: Rs. 70,000.0 - Rs. 1,30,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Developer

Job Description

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.

Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market, and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must-Have Skills: Google BigQuery
Good-to-Have Skills: No Technology Specialization

Roles & Responsibilities:
1. Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
2. Proven track record of delivering data integration and data warehousing solutions
3. Strong SQL and hands-on experience (No FLEX)
4. Experience with data integration and migration projects
5. Proficient in the BigQuery SQL language (No FLEX)
6. Understanding of cloud-native services: bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes
7. Experience in cloud solutions, mainly data platform services; GCP certifications
8. Experience in shell scripting, Python (No FLEX), Oracle, and SQL

Professional & Technical Skills:
1. Expert in Python (No FLEX). Strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage preferred
2. Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, and Kubernetes (No FLEX)
3. Proficiency with tools to automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4. Open mindset and the ability to quickly adopt new technologies
5. Performance tuning of BigQuery SQL scripts
6. GCP certification preferred
7. Experience working in an Agile environment

Professional Attributes:
1. Good communication skills
2. Ability to collaborate with different teams and suggest solutions
3. Ability to work independently with little supervision, or as part of a team
4. Good analytical and problem-solving skills
5. Good team-handling skills

Educational Qualification: 15 years of full-time education
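
For illustration only (not part of the posting): a minimal sketch of the kind of BigQuery ETL step this role describes, using the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical.

```python
# Minimal BigQuery ETL sketch: transform raw events into a curated table.
# Assumes google-cloud-bigquery is installed and application-default
# credentials are configured; all names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

sql = """
    SELECT user_id,
           DATE(event_ts) AS event_date,
           COUNT(*)       AS events
    FROM `my-gcp-project.raw.events`
    GROUP BY user_id, event_date
"""
job_config = bigquery.QueryJobConfig(
    destination="my-gcp-project.curated.daily_events",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
job = client.query(sql, job_config=job_config)
job.result()  # block until the query job finishes
print(f"Wrote {client.get_table(job.destination).num_rows} rows")
```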

  • Salary: Rs. 0.0 - Rs. 1,50,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Data Engineer

Job Description

Experience: 4-6 years
Work Location: Chennai, TN / Bangalore, KA / Hyderabad, TS
Skill Required: Digital: Big Data and Hadoop Ecosystems; Digital: PySpark

Job Description:
• Work as a developer on Big Data, Hadoop, or data warehousing tools and cloud computing
• Work on Hadoop, Hive SQL, Spark, and Big Data ecosystem tools
• Experience working with teams in a complex organization involving multiple reporting lines
• Strong functional and technical knowledge to deliver what is required, and well acquainted with banking terminology
• Strong DevOps and Agile development framework knowledge
• Create Scala/Spark jobs for data transformation and aggregation
• Experience with stream-processing systems such as Storm, Spark Streaming, and Flink

Essential Skills:
• Working experience with Hadoop, Hive SQL, Spark, and Big Data ecosystem tools
• Able to tweak queries and work on performance enhancement
• Responsible for delivering code, setting up the environment and connectivity, and deploying code to production after testing
• Strong functional and technical knowledge to deliver what is required, and well acquainted with banking terminology; may occasionally serve as the primary contact and/or driver for small- to medium-sized projects
• Strong DevOps and Agile development framework knowledge
• Good technical knowledge of cloud computing (AWS or Azure cloud services) is preferable
• Strong conceptual and creative problem-solving skills; ability to work with considerable ambiguity and to learn new and complex concepts quickly
• Experience working with teams in a complex organization involving multiple reporting lines
• Solid understanding of object-oriented programming and HDFS concepts
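
For illustration only (not part of the posting): a minimal PySpark aggregation job of the sort described above ("Create Scala/Spark jobs for data transformation and aggregation"), assuming a cluster with Hive support. Database, table, and column names are hypothetical.

```python
# Minimal PySpark job: aggregate raw banking transactions per account/day.
# Assumes a Spark cluster with Hive support; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("txn-daily-aggregate")
         .enableHiveSupport()
         .getOrCreate())

# Read raw transactions from a Hive table and aggregate.
txns = spark.table("raw_db.transactions")
daily = (txns
         .withColumn("txn_date", F.to_date("txn_ts"))
         .groupBy("account_id", "txn_date")
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("txn_count")))

# Write back to a partitioned Hive table for downstream reporting.
(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("curated_db.daily_txn_summary"))
spark.stop()
```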

  • Salary: Rs. 70,000.0 - Rs. 1,30,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Developer

Job Description

As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding the team in implementing effective solutions. You will also engage with stakeholders to gather requirements and provide updates on project progress, ensuring alignment with business objectives and fostering a collaborative environment for innovation and efficiency.

Roles & Responsibilities:
• Expected to be an SME
• Collaborate with and manage the team to perform
• Responsible for team decisions
• Engage with multiple teams and contribute to key decisions
• Provide solutions to problems for the immediate team and across multiple teams
• Facilitate knowledge-sharing sessions to enhance team capabilities
• Monitor project progress and ensure adherence to timelines and quality standards

Professional & Technical Skills:
• Must-Have Skills: Proficiency in SAP ABAP development for HANA
• Strong understanding of application design principles and methodologies
• Experience with performance tuning and optimization of ABAP programs
• Familiarity with SAP HANA database concepts and data modeling
• Ability to troubleshoot and resolve technical issues efficiently

Additional Information:
• The candidate should have a minimum of 5 years of experience in SAP ABAP development for HANA
• This position is based at our Bengaluru office
• 15 years of full-time education is required

  • Salary: Rs. 0.0 - Rs. 2,00,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Application Lead

Job Description

Job Title: Developer
Work Location: Chennai, TN and Bangalore, KA
Skill Required: Digital: Python; Digital: Machine Learning; Generative AI
Experience Range in Required Skills: 4-6 years
Job Description: Python, Gen AI, ML
Essential Skills: Python, Gen AI, ML
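
For illustration only (not part of the posting): the listing names only Python, Gen AI, and ML as skills, so here is a neutral scikit-learn sketch of the ML side. The dataset and model choice are arbitrary, not taken from the posting.

```python
# Minimal scikit-learn pipeline: scale features, fit a simple classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```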

  • Salary: Rs. 70,000.0 - Rs. 1,30,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Developer

Job Description

Experienced Senior Data Engineer with a strong background in PySpark, Python, and real-time data processing. The ideal candidate will have hands-on experience building and maintaining scalable data applications, working with large-scale datasets, and implementing machine learning algorithms to match, merge, and enrich data. Expertise in AWS DynamoDB, ElasticSearch, and match algorithms is essential.

Key Responsibilities:
• Design, develop, and maintain robust data pipelines using PySpark (Spark Streaming) and Python
• Handle large-scale structured and semi-structured data ingestion, transformation, and enrichment from diverse sources
• Implement and optimize match, merge, and enrich algorithms for high-volume data processing
• Apply machine learning libraries and custom-built algorithms for real-time analytics and decision-making
• Architect and manage end-to-end data systems, including data modeling, storage, and visualization
• Work with AWS services, particularly DynamoDB and ElasticSearch, to support scalable and efficient data operations
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
• Ensure data quality, integrity, and security across all stages of the data lifecycle
• Excellent problem-solving skills and the ability to guide and mentor the team on technical implementations

Required Skills & Qualifications:
• Strong proficiency in PySpark, Spark Streaming, and Python, with a minimum of 4+ years of experience in PySpark
• Experience in real-time data processing and large-scale data systems
• Hands-on experience with AWS EMR (Dynamics connector), DynamoDB, and ElasticSearch
• Experience implementing match algorithms and working with data-enrichment workflows
• Solid understanding of data modeling, ETL processes, and data visualization tools
• Familiarity with machine learning frameworks and integrating ML into data pipelines
• Strong communication and documentation skills
• Experience with additional AWS services (e.g., S3, Lambda, Glue)
• Enterprise Service Bus (ESB) reading; MS Dynamics good to have
• Exposure to CI/CD pipelines and DevOps practices; Concourse good to have
• Knowledge of data governance and compliance standards
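
For illustration only (not part of the posting): a hedged sketch of a real-time pipeline in the spirit of the responsibilities above, using Spark Structured Streaming. Broker, topic, schema, and checkpoint paths are hypothetical; the match/merge/enrich and DynamoDB/ElasticSearch upserts are left as a placeholder; running it also requires the spark-sql-kafka connector package on the classpath.

```python
# Sketch: consume JSON records from Kafka, parse them, and hand each
# micro-batch to a function where match/merge/enrich and DynamoDB/ES
# upserts would live. All names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("match-merge-enrich").getOrCreate()

schema = (StructType()
          .add("record_id", StringType())
          .add("name", StringType())
          .add("score", DoubleType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical
       .option("subscribe", "customer-events")             # hypothetical
       .load())
events = (raw.select(F.from_json(F.col("value").cast("string"), schema)
                      .alias("e"))
             .select("e.*"))

def upsert_batch(batch_df, batch_id):
    # Placeholder for match/merge/enrich logic and DynamoDB/ES writes.
    batch_df.show(truncate=False)

query = (events.writeStream
         .foreachBatch(upsert_batch)
         .option("checkpointLocation", "/tmp/checkpoints/match-merge-enrich")
         .start())
query.awaitTermination()
```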

  • Salary: Rs. 0.0 - Rs. 1,80,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Senior Data Engineer

Job Description

Network security

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Network security

Job Description

• Job Summary: Knowledge of part classification and attributes
• Roles and Responsibilities: Support part-classification taxonomy data cleanup
• Required Skills: Good understanding of components used in HVAC systems, commodity taxonomy, and attributes for components
• Desired Skills: Good understanding of components used in HVAC systems, commodity taxonomy, and attributes for components
• Soft Skills: Good communication skills, MS Office tools, user-level knowledge of PLM and ERP systems

  • Salary: Rs. 47,500.0 - Rs. 48,000.0
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Parts Engineer

Job Description

Power BI and Apps III

  • Salary: As per industry standard
  • Industry: IT-Software / Software Services
  • Functional Area: IT Software - Application Programming, Maintenance
  • Role Category: Programming & Design
  • Role: Power BI and Apps III