Job Description

As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for your immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application development tasks.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in SAP FI/CO (Finance).
- Strong understanding of financial processes and reporting.
- Experience with application development methodologies and best practices.
- Ability to analyze business requirements and translate them into technical specifications.
- Familiarity with integration techniques and tools related to SAP applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in SAP FI/CO (Finance).
- This position is based at our Pune office.
- 15 years of full-time education is required.

  • Salary : Rs. 0.0 - Rs. 1,80,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Application Developer

Job Description

Skill Required - Digital : Amazon Web Services (AWS), Cloud Computing
Experience Range - 6 to 8 years

Job Description – Key Accountabilities:
• Experience building data pipelines for various heterogeneous data sources.
• Identify, design, and implement scalable data delivery pipelines and automate manual processes.
• Build the infrastructure required for optimal extraction, transformation, and loading of data using cloud technologies such as AWS and Azure.
• Develop end-to-end, enterprise-level processes for use by clinical data configuration specialists to prepare extraction and transformation of raw data quickly and efficiently from various sources at the study level.
• Coordinate with downstream users such as statistical programmers, SDTM programming, analytics, and clinical data programmers to ensure that outputs meet end-user requirements.
• Experience creating ELT and ETL processes to ingest data into data warehouses and data lakes.
• Experience creating reusable data pipelines for heterogeneous data ingestion.
• Manage and maintain pipelines and troubleshoot data in the data lake or warehouse.
• Provide visualization and analysis of data stored in the data lake.
• Define and track KPIs and drive continuous improvement.
• Develop and maintain tools, libraries, and reusable templates of data pipelines and standards for study-level consumption by data configuration specialists.
• Collaborate with vendors and cross-functional teams to build and align on data transfer specifications and ensure a streamlined data integration process.
• Provide ad-hoc analysis and visualization as needed.
• Ensure accurate delivery of data format and frequency, with quality deliverables per specification.
• Participate in the development, maintenance, and training provided by standards and other functions on transfer specs and best practices used by the business.
• Collaborate with the system architecture team in designing and developing data pipelines per business needs.
• Network with key business stakeholders on refining and enhancing the integration of structured and unstructured data.
• Provide expertise for structured and unstructured data ingestion.
• Develop organizational knowledge of key data sources and systems, and be a valuable resource on how to best integrate data to pursue company objectives.
• Provide technical leadership on various aspects of clinical data flow, including assisting with the definition, build, and validation of application program interfaces (APIs), data streams, and data staging to various systems for data extraction and integration.
• Experience creating data integrity and data quality checks for data ingestion (illustrated in the sketch after this list).
• Coordinate with database builders, clinical data configuration specialists, and data management (DM) programmers to ensure accuracy of data integration per SOPs.
• Provide technical support/consultancy and end-user support; work with Information Technology (IT) on troubleshooting, reporting, and resolving system issues.
• Develop and deliver training programs to internal and external teams; ensure timely communication of new and/or revised data transfer specs.
• Continuous improvement/continuous development.
• Efficiently prepare and process large datasets for downstream consumption by various end users.
• Understand end-to-end stakeholder requirements and contribute to processes and conventions for clinical data ingestion and data transfer agreements.
• Adhere to SOPs for computer system validation and all GCP (Good Clinical Practice) regulations; ensure compliance with own Learning Curricula and corporate and/or GxP requirements.
• Assist with quality review of the above activities when performed by a vendor, as needed.
• Assess and enable clinical data visualization software in the data flows.
• Perform other duties as assigned, within timelines.
• Perform clinical data engineering tasks according to applicable SOPs (standard operating procedures) and processes.

Educational Qualification:
• Bachelor's degree in computer science, statistics, biostatistics, mathematics, biology, or another health-related field, or equivalent experience that provides the skills and knowledge necessary to perform the job.

Experience:
• BS with ~8+ years of experience; minimum of 5 years' experience in data engineering, building data pipelines to manage heterogeneous data ingestion, or similar data integration across multiple sources, including collected data.
• Experience with Python/R, SQL, NoSQL.
• Cloud experience (e.g., AWS, Azure, or GCP).
• Experience with GitHub and GitLab.
• Experience with Jenkins.
• Experience deploying data pipelines in the cloud.
• Experience with Apache Spark (Databricks).
• Experience setting up and working with data warehouses and data lakes (e.g., Snowflake, Amazon Redshift).
• Experience setting up ELT and ETL.
• Experience with unstructured data processing and transformation.
• Experience developing and maintaining data pipelines for large amounts of data efficiently.
• Must understand database concepts; knowledge of XML, JSON, and APIs.
• Demonstrated ability to lead projects and work groups; strong project management skills; proven ability to resolve problems independently and collaboratively.
• Must be able to work in a fast-paced environment with demonstrated ability to juggle and prioritize multiple competing tasks and demands.
• Ability to work independently, take initiative, and complete tasks to deadlines.

Special Skills/Abilities:
• Strong attention to detail and organizational skills.
• Strong project management skills.
• Strong understanding of end-to-end processes for data collection, extraction, and end-user analysis needs.
• Strong ability to communicate with cross-functional stakeholders.
• Strong ability to develop technical specifications based on communication from stakeholders.
• Quick learner, comfortable asking questions and learning new technologies and systems.
• Good knowledge of office software (Microsoft Office).
• Experience creating custom functions in Python/R.
• Cloud computing (AWS, Snowflake, Databricks).
• Ability to visualize large datasets.
• R Shiny/Python app experience a plus.

Preferable:
• Experience developing R Shiny and Python apps.
• Experience with Hadoop.
• Experience with Agile development methods.
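The accountabilities above centre on ELT/ETL ingestion pipelines with built-in data integrity and quality checks, using Spark and cloud storage. As a rough illustration of that pattern only, and not a prescription for this role, here is a minimal PySpark sketch: read a raw study-level file, run simple null and duplicate checks, then write a curated copy. The bucket, paths, and column names are hypothetical placeholders.

```python
# Illustrative only: minimal PySpark ingestion with simple data-quality checks.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical-ingest-example").getOrCreate()

# Extract: read raw study-level data from cloud storage (placeholder path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/lab_results.csv")

# Quality checks: required fields populated, subject/visit pairs unique.
required = ["subject_id", "visit", "result_value"]
null_counts = {c: raw.filter(F.col(c).isNull()).count() for c in required}
dup_count = (raw.groupBy("subject_id", "visit").count()
                .filter(F.col("count") > 1).count())
if any(null_counts.values()) or dup_count:
    raise ValueError(f"Quality check failed: nulls={null_counts}, duplicates={dup_count}")

# Transform and load: cast types, stamp ingestion time, write to the curated zone.
curated = (raw
           .withColumn("result_value", F.col("result_value").cast("double"))
           .withColumn("ingested_at", F.current_timestamp()))
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/lab_results/")
```

In practice such checks would typically be factored into a reusable template so configuration specialists can apply them per study, which is the reuse the posting asks for.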

  • Salary : Rs. 90,000.0 - Rs. 1,65,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Engineer

Job Description

Job Description:
1. Good understanding of data warehousing concepts, with hands-on experience working across multiple databases, including Oracle, PostgreSQL, and MySQL.
2. Extensive experience in Databricks, with a focus on building scalable data solutions.
3. Proven ability to design, develop, and maintain robust ETL/ELT pipelines using Databricks to extract, transform, and load data from diverse sources into target systems (see the sketch after this list).
4. Strong understanding of data integration from both structured and unstructured sources such as relational databases, flat files, APIs, and cloud storage.
5. Skilled in implementing data validation, cleansing, and reconciliation processes to ensure high data quality and integrity.
6. Hands-on experience working with the AWS cloud platform, leveraging its services for data processing and storage.
7. Familiar with Agile processes and DevOps practices, including Jira, Confluence, GitHub, and CI/CD pipelines.
8. Excellent communication skills and a strong team player with a collaborative mindset.

Tools & Technologies: Databricks, AWS Glue, Redshift, Oracle DB, Python, Jira, Confluence
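As a rough illustration of item 3 above, the sketch below shows one possible shape of a Databricks ETL/ELT step: extract from a relational source over JDBC, apply basic validation and cleansing, and load the result into a Delta table. The connection URL, credentials, and table names are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch of an ETL step on Databricks; all connection details,
# schemas, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("databricks-etl-example").getOrCreate()

# Extract: pull a source table from PostgreSQL over JDBC (placeholder credentials).
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://example-host:5432/sales")
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .load())

# Transform: basic cleansing and validation (drop malformed rows, deduplicate, cast dates).
clean = (orders
         .filter(F.col("order_id").isNotNull())
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_date")))

# Load: write to a Delta table that downstream jobs (e.g. a Redshift copy) can consume.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```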

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Analyst

Job Description

PySpark, EMR, AWS

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : PySpark, EMR, AWS

Job Description

• Gather and analyze business requirements through workshops, interviews, and process mapping
• Translate requirements into well-structured epics, user stories, and acceptance criteria
• Maintain and refine the backlog in close collaboration with the Product Owner and Technical Lead
• Document solution designs, data flows, and technical specifications in collaboration with developers
• Produce clear user guides, release notes, FAQs, and training materials for the main deliverables
• Communicate effectively with stakeholders
• A ServiceNow/ITSM/ITIL background is beneficial

This is a functional role.

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : ServiceNow Technical Writer

Job Description

IIB developer

  • Salary : Rs. 10.0 - Rs. 12.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : IIB Developer

Job Description

.Net Full stack developer

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : .Net Full stack developer

Job Description

Job Description: Primary Skill: Java Spring Boot microservices with GCP, Kafka
- Design, develop, and maintain scalable full-stack applications using Java (Spring Boot) and React.
- Build and integrate event-driven systems using Apache Kafka within a GCP environment (see the sketch after this list).
- Develop RESTful APIs and work with microservices architecture.
- Collaborate with cross-functional teams (DevOps, Product, QA) to deliver high-quality solutions.
- Ensure system responsiveness, performance, and scalability.
- Participate in code reviews, testing, and debugging.
- Leverage GCP services (e.g., Pub/Sub, Cloud Functions, BigQuery) to optimize application performance.
- Write clean, maintainable, and testable code.
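The role itself is Java/Spring Boot, but as a compact, language-neutral illustration of the publish side of the event-driven Kafka flow described above, here is a minimal sketch using the confluent-kafka Python client. The broker address, topic name, and event payload are hypothetical placeholders.

```python
# Illustrative only: publish a domain event to a Kafka topic.
# Broker, topic, and payload are hypothetical placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")

event = {"order_id": "A-1001", "status": "CREATED"}
producer.produce("order-events",
                 key=event["order_id"],
                 value=json.dumps(event).encode("utf-8"),
                 callback=delivery_report)
producer.flush()  # block until outstanding messages are delivered
```

Consumers (or a bridge into GCP Pub/Sub) would then react to these events asynchronously, which is the decoupling an event-driven microservices architecture is after.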

  • Salary : Rs. 70,000.0 - Rs. 1,30,000.0
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Developer

Job Description

Please find below the Job Description (JD). Kindly share relevant profiles along with details of your footprint and presence in Bangkok, including any local support or delivery capabilities. Looking forward to your response.

Contracting period: 6 to 12 months (to start with)

[Contractor - Big Data Engineer] – Fraud Detection Solutions

Project context
Shaping the future of travel has always been important to us at Amadeus. Today, with technology getting smarter by the minute, that future is more exciting than ever. Amadeus works at the heart of the global travel industry and provides the technology that keeps the travel sector moving - from initial search to making a booking, from pricing to ticketing, from managing reservations to managing check-in and departure processes. Our products and solutions help to improve the business performance of our customers: travel agencies, corporations, airlines, ground handlers, hotels, railways, car rental companies, airports, cruise lines, and ferry operators. Missions at Amadeus offer the opportunity to learn and grow in an exciting and multicultural environment.

Goals and deliverables
• Analyse specifications
  - Define user requirements for the development of new (or the upgrade of existing) software solutions.
  - Identify the right data sources in terms of functional content, frequency, and quality.
• Design systems and code
  - Design technical solutions and perform feasibility studies.
  - Propose viable technical solutions to Product Management and/or users for validation.
  - Develop software according to Amadeus standards, considering security, resiliency, and resource efficiency.
  - Implement data acquisition, data modelling, cleansing, aggregation, and other data treatment pipelines (a sketch follows below).
  - Design data models that provide the best compromise between complexity, response time, and running costs.
  - Perform technology watch on Big Data technologies, covering cloud solutions, major open-source projects, etc.
  - Master the main technical frameworks (e.g., Spark for distributed processing), languages (Scala or, more generally, functional programming), etc.
• Test and maintain the software
  - Conduct unit, package, and performance tests and ensure a level of quality in line with the Amadeus guidelines.
  - Participate in the validation phase of the product cycle, fine-tuning when necessary to finalize the product.
  - Support the customer by debugging existing solutions in collaboration with the Product Manager or Product Definition Analyst.
• Document your work
  - Produce the software documentation necessary for the application and issue it to the requesting departments.

Ideal candidate:
• Master's degree in IT Engineering
• 2-3 years of hands-on big data engineering experience
• Programming/scripting languages: Spark, Scala, Java, Shell, Python
• Knowledge of Elasticsearch and SQL databases
• Hands-on experience with Databricks and Hadoop HDFS
• Mathematics, statistics, and machine learning are a good asset for data engineering
• Strong interest in cybersecurity and Generative AI
• Ability to work in multicultural environments
• Rigorous problem-solving skills
• Creativity and innovation
• Cloud technologies: Azure preferred
• Team player, with the ability to work independently, drive initiatives, and teach others
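As a rough illustration of the kind of aggregation pipeline the role describes (data acquisition, cleansing, and aggregation with Spark feeding fraud-detection solutions), here is a minimal PySpark sketch that derives simple per-account booking-velocity features. The source path, column names, and the 24-hour window are hypothetical placeholders, not Amadeus specifics.

```python
# Illustrative only: derive per-account velocity features from raw booking
# events with Spark. Paths, columns, and the window are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-features-example").getOrCreate()

# Raw booking events (placeholder path and schema).
bookings = spark.read.parquet("s3://example-bucket/raw/bookings/")

# Keep only the last 24 hours of events.
recent = bookings.filter(
    F.col("booking_ts") >= F.current_timestamp() - F.expr("INTERVAL 24 HOURS"))

# Aggregate per-account features of the kind a downstream fraud-scoring model might consume.
features = (recent.groupBy("account_id")
            .agg(F.count(F.lit(1)).alias("bookings_24h"),
                 F.sum("amount").alias("amount_24h")))

features.write.mode("overwrite").parquet("s3://example-bucket/features/booking_velocity/")
```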

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Big Data Engineer - Fraud Detection Solutions