We found 33 jobs matching your search

Advanced Search

Skills

Locations

Experience

Job Description

The role should be able to handle one or more programs simultaneously, with the following scope of work and the underlying skill sets.

- Proficient in Linux systems administration: should know the key commands, files, and workflows needed to install, configure, and maintain a basic Linux operating system. Any experience with SUSE OS is an added advantage.
- Able to automate a given workflow using Python 3 and/or Bash shell scripting, maintain multiple such scripts as a deliverable, and sustain them across product life cycles.
- Has the basic debugging skills to know where to look for logs during OS-related audits and failures, and can narrow down to a root cause in an OS kernel or service issue.
- Works hand in glove with the product DevOps team to integrate the native scripts and code into the product's deployment workflows. Knowledge of GitLab CI, JFrog Artifactory, and other DevOps infrastructure is an added advantage.
- Knows how to set up, deploy, and configure basic monitoring and audit components such as Prometheus and Grafana, either on a native Linux OS or on a container orchestrator such as Kubernetes, to monitor the native Linux OS and its compute resources.

Must-have skills:
- Linux OS administration and debugging skills (SUSE OS expertise is a plus).
- Moderate scripting skills in Python and Bash.
- Moderate knowledge of deploying components such as the K3s or K8s container service and the Prometheus/Grafana monitoring stack.
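As an illustration of the basic OS debugging described above, a minimal sketch of scanning service log lines for failure indicators is shown below. The keywords and sample lines are hypothetical examples, not part of this posting.

```python
# Illustrative sketch only: scanning log lines for failure indicators,
# the kind of basic OS debug task the role describes. The keyword list
# and sample log lines are invented for the example.

KEYWORDS = ("error", "failed", "segfault", "oom-killer")

def find_failure_lines(log_lines):
    """Return (line_number, line) pairs that mention a failure keyword."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        lowered = line.lower()
        if any(keyword in lowered for keyword in KEYWORDS):
            hits.append((number, line.rstrip()))
    return hits

if __name__ == "__main__":
    sample = [
        "systemd[1]: Started nginx.service.",
        "kernel: Out of memory: oom-killer invoked",
        "sshd[412]: Failed password for root",
    ]
    for number, line in find_failure_lines(sample):
        print(f"{number}: {line}")
```

In practice the same filter could be fed from `journalctl -u <service>` output, with hits shipped to a Prometheus/Grafana stack for alerting.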

Responsibilities

Skills that pose a definite advantage:
- Worked on SUSE OS, especially SUSE Micro.
- Worked on deployment of Linux OS on small edge compute systems and IoT devices.
- Worked with edge/IoT deployment techniques such as Kairos and CanvOS.
- Knows how to deploy and remotely manage Linux virtual machines on virtualized IaaS such as VMware and KVM.
  • Salary : Rs. 10,00,000 - Rs. 25,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Linux System Engineer

Job Description

C, C++, Python, UART, SPI, I2C, TCP/IP, Wi-Fi and Zigbee, Windows, Linux (React JS, Node.js - optional)

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : C, C++, Python Developer

Job Description

Job Description: We are seeking a highly skilled Development Lead with expertise in Generative AI to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge AI models and systems for our clients. Your primary focus will be on driving innovation and leveraging generative AI techniques to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of AI technology.

Responsibilities:
- Develop and implement creative experiences, campaigns, apps, and digital products with generative AI technologies at their core.
- Lead and deliver projects involving cloud Gen-AI platforms and services, data pre-processing, cloud AI PaaS solutions, base foundation models, and fine-tuned models, working with a variety of LLMs and LLM APIs.
- Conceptualize, design, and build experiences and solutions that demonstrate the minimum required functionality within tight timelines.
- Collaborate with creative technology leaders and cross-functional teams to test the feasibility of new ideas, refine and validate client requirements, and translate them into working prototypes and, from there, into scalable Gen-AI solutions.
- Research emerging trends and techniques in generative AI to stay at the forefront of innovation.
- Evaluate new products, platforms, and frameworks in generative AI on an ongoing basis to stay on top of this dynamic, evolving field.
- Design and optimize Gen-AI apps for efficient data processing and model leverage.
- Implement LLMOps processes and manage Gen-AI apps and models across the lifecycle, from prompt management to results evaluation.
- Evaluate and fine-tune models to ensure high performance and accuracy.
- Collaborate with engineers to develop and integrate AI solutions into existing systems.
- Stay up to date with the latest advancements in Gen-AI and contribute to the company's technical knowledge base.

Must-Have:
- Strong expertise in Python development and the Python ecosystem, including frameworks/libraries for front-end and back-end development, data processing, API integration, and AI/ML solution development.
- Minimum 2 years of hands-on experience with generative AI applications and solutions.
- Minimum 2 years of hands-on experience with large language models.
- Solid understanding of transformer models and how they work.
- Solid understanding of diffusion models and how they work.
- Hands-on experience building production solutions using a variety of LLMs and models, including GPT-4, Gemini, Claude, Llama, etc.
- Deep expertise in cloud Gen-AI platforms, services, and APIs, including Azure OpenAI, AWS Bedrock, and/or GCP Vertex AI.
- Solid hands-on experience with enterprise RAG technologies and frameworks, including LangChain, LlamaIndex, etc.
- Solid hands-on experience developing end-to-end RAG pipelines.
- Solid hands-on experience with agent-driven Gen-AI architectures and solutions.
- Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc.
- Experience with vector databases (Azure AI Search, AWS Kendra, FAISS, Milvus, etc.).
- Experience with data preprocessing and post-processing model/results evaluation.
- Solid hands-on experience with diffusion models and AI art models, including SDXL, DALL-E 3, Adobe Firefly, and Midjourney.
- Hands-on experience with image processing and creative automation at scale using AI models.
- Hands-on experience with image and media transformation and adaptation at scale using AI art and diffusion models.
- Hands-on experience with dynamic creative use cases using AI art and diffusion models.
- Hands-on experience fine-tuning LLM models at scale.
- Hands-on experience fine-tuning diffusion models, including techniques such as LoRA for AI art models.
- Hands-on experience with AI speech models and services, including text-to-speech and speech-to-text.
- Ability to lead design and development of full-stack Gen-AI apps and products built on LLMs and diffusion models.
- Ability to lead design and development of creative experiences and campaigns built on LLMs and diffusion models.

Nice-to-Have:
- Good background in machine learning solutions and algorithms.
- Experience designing, developing, and deploying production-grade machine learning solutions.
- Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
- Experience with custom ML model development and deployment.
- Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of machine learning algorithms and their practical applications.
- Experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.
- Hands-on experience with video generation models.
- Hands-on experience with 3D generation models.

  • Salary : Rs. 25,000 - Rs. 3,50,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Technical Lead - Gen AI

Job Description

Job Profile: Data Science Specialist
Primary: Python, Machine Learning
Secondary: Hadoop

Overall purpose of role:
• As a Data Engineer, you will be responsible for creating and optimizing data and data pipelines for machine learning projects: writing PySpark code for ETL operations on source data, data wrangling, and performing data profiling and aggregation to prepare model-ready data.
• The Data Engineer will support our designers, data analysts, and data scientists on different project initiatives and will ensure optimal project delivery.
• You must be self-directed and comfortable supporting development requirements, including creation of an ML framework and components and creation of data pipelines in PySpark, Hive, and MLlib for end-to-end (E2E) product lifecycle roadmaps.
• You will also act as a data enabler for ML use cases, as well as for any tasks that involve generating insights.
• The individual will need to gain a good understanding of the data in all the source systems in order to support any downstream use-case/MI/reporting requirements.
• Experience in the banking/financial industries.
• Good understanding of building data pipelines on PySpark and the Hadoop ecosystem.
• Good understanding of various statistical analysis techniques.
• Understanding of MI and machine learning technologies and solutions.
• Relational and NoSQL databases.
• Strong reporting and communication skills.
• Ability to work on a number of tasks across a variety of projects at the same time.
• Ability to build effective internal relationships.
• Ability to work with an existing body of processes, with an endeavour to improve them wherever feasible.
• Must have experience working in an IT project environment and an understanding of IT and business strategies and governance.
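As a sketch of the profiling and aggregation step this role describes, the snippet below shows the shape of the transformation in plain Python. A real pipeline would use PySpark DataFrames; the column and field names are invented for the example.

```python
# Illustrative stand-in for data profiling and aggregation when
# preparing model-ready data. Production code would use PySpark
# (groupBy/agg); stdlib Python is used here only to sketch the idea.
from collections import defaultdict

def profile_column(rows, column):
    """Basic profile of one column: row count, null count, distinct values."""
    values = [row.get(column) for row in rows]
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
    }

def aggregate_by(rows, key, amount_field):
    """Sum amount_field per key: a typical model-ready aggregation."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[amount_field]
    return dict(totals)

transactions = [
    {"account": "A1", "amount": 100.0},
    {"account": "A1", "amount": 50.0},
    {"account": "B2", "amount": 75.0},
]
print(profile_column(transactions, "amount"))
print(aggregate_by(transactions, "account", "amount"))
```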

  • Salary : Rs. 0 - Rs. 25,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Python

Job Description

The Sr. BI Developer will be part of the global team that designs, develops, and maintains BI solutions leveraging modern BI tools and data engineering technologies for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. The Koch Technology Center (KTC) is being developed in India to extend its IT operations, as well as to act as a hub for innovation in the IT function. As KGSI rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out KGSI (Koch Global Services India) over the next several years. Working closely with global colleagues will provide significant global exposure. We are seeking a Sr. BI Developer who is self-motivated and can drive projects with minimal or no direction. The ideal candidate has a natural zeal to learn, experiment, and innovate, believes in the art of the possible, is passionate about BI, and keeps up to date with how BI products and the industry are evolving. The incumbent must be well versed in the entire BI development lifecycle and must have strong hands-on experience with SQL, data modelling, ETL, data visualization, BI performance troubleshooting, and best practices, and must have worked with the different features and capabilities of Power BI, i.e. dataflows, deployment pipelines, apps, the Power BI REST APIs, the XMLA endpoint, etc.

Responsibilities

· Design and develop new BI solutions and maintain existing ones.
· Develop and re-engineer complex BI solutions.
· Troubleshoot, enhance, and write complex SQL queries, stored procedures, and CTEs.
· As part of the BI COE, drive the effort and bring in ideas and strategies that promote data literacy and self-service BI in the organization.
· Be a proponent of the art of the possible: do, show, and share.
· Be very well versed in the features of Power BI and know which feature applies to which use case.
· Set up BI best practices and standards for the organization.
· Strategize and provide recommendations to leadership to modernize BI for the organization by proposing ideas and challenging the status quo.
· Engage with businesses to help them with BI solutioning.
· Transform legacy BI solutions into modern self-service BI solutions.
· Provide technical guidance, training, and support to end users.
· Provide training and support to the Power BI user community to drive BI adoption and create citizen developers.
· Review, experiment with, and demonstrate existing and new features to the Power BI community.
· Work closely with other teams, such as O365, Power BI admins, and data gateway admins, to ensure a holistically owned, organization-wide BI practice.
  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Sr. Power BI Developer

Job Description

The Data Engineer will be part of an international team that designs, develops, and delivers new applications for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. Koch Global Services (KGS) is being developed in India as a shared services operation, as well as a hub for innovation across functions. As KGS rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out Koch Global Services (KGS) over the next several years. Working closely with global colleagues will provide significant international exposure. We are seeking a Data Engineer expert to join the KGS Analytics capability. We love passionate, forward-thinking individuals who are driven to innovate. You will have the opportunity to engage with Business Analysts, Analytics Consultants, and internal customers to implement ideas, optimize existing dashboards, and create visualization products using powerful, contemporary tools. This opportunity engages diverse types of business applications and data sets at a rapid pace, and our ideal candidate gets excited when faced with a challenge.

What value does this role provide to KGS? As part of the visualization team, you will focus on designing KPIs for decision makers in the business, creating value through optimization opportunities, by measuring progress against goals and past decisions, and through data-grounded situational awareness.

Why would a candidate want this role? If a candidate is entrepreneurial in the way they approach ideas, Koch is among the most fulfilling organizations they could join. We are growing an analytics capability and looking for entrepreneurially minded innovators who can help us further develop this service of exceptionally high value to our business. Due to the diversity of companies and work within Koch, we are frequently working in new and interesting global business spaces, with data and analytics applications that are unique relative to opportunities from other employers in the marketplace.

What does success look like? Bringing forward innovative and valuable ideas relevant to our mission of creating value through analytics; developing complex analysis products at scale, with high accuracy; and developing valuable, relevant, and accurate analytics products that internal customers use in their decision-making processes.

How will this role prepare the candidate for career growth? There are few better places to work with world-class tools and unique, value-oriented problem types at this scale. As a result, the successful candidate will develop advanced knowledge of best-in-class tools and techniques while expanding their capacity for analytic, entrepreneurial thinking that will benefit them in achieving their career aspirations.

Describe your top performer: an enthusiastically collaborative, value-seeking developer whose exceptional technical skills are surpassed only by their appetite for learning and innovation; open to challenging and being challenged with new ideas and established approaches; rapidly prototypes analytic approaches that explore opportunities or meet customer needs and readily translate into production.

Responsibilities

· Work with business partners to understand key business drivers and use that knowledge to experiment with and transform Business Intelligence and Advanced Analytics solutions to capture the value of potential business opportunities.
· Translate a business process/problem into a conceptual and logical data model and a proposed technical implementation plan.
· Assist in developing and implementing consistent processes for data modeling, mining, and production.
· 5+ years of industry professional experience, or a bachelor's degree in MIS, CS, or an industry equivalent.
· At least 5 years of data engineering experience (preferably AWS), with strong knowledge of SQL and Python and of developing, deploying, and modelling DWHs and data pipelines on the AWS cloud or similar cloud environments.
· Required technical experience: Snowflake, Snowpipe, Snowpark, Big Data, PySpark, Python, SQL, UDFs, AWS (S3, Redshift, Glue, IAM, EC2, Lambda, etc.), Git/ADO, CI/CD, Matillion, Talend.
· Focus on implementing development processes and tools that allow for the collection of and access to metadata, done in a way that allows for widespread code reuse (e.g., utilization of ETL frameworks, generic metadata-driven tools, shared data dimensions, etc.) and that enables impact analysis as well as source-to-target tracking and reporting.
· Improve data pipeline reliability, scalability, and security.
· 3-4+ years of experience with business and technical requirements analysis, elicitation, data modeling, verification, and methodology development, with a good hold on communicating complex technical ideas to technical and non-technical team members.
· Demonstrated experience with Snowflake and AWS Lambda with Python development for provisioning and troubleshooting.

What will put you ahead (experience and education preferred):
· 4+ years' experience with the Amazon Web Services stack, including S3, Athena, Redshift, Glue, or Lambda.
· Preferred technical expertise: Snowflake, Snowpipe, Snowpark, Big Data, Spark, Python, SQL, UDFs, AWS (S3, Redshift, Glue, IAM, EC2, Lambda, etc.), Git/ADO, CI/CD, Matillion, Talend.
· 4+ years' experience with cloud data warehousing solutions, including Snowflake, with development and implementation of dimensional modeling.
· 4+ years' experience with data visualization and statistical tools like Power BI, Python, etc.
· Development experience with Docker and a Kubernetes environment (would be a plus).
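The "metadata-driven tools for widespread code reuse" idea above can be sketched as one generic ETL step driven entirely by a config dict instead of per-pipeline code. The config keys and transformations here are hypothetical examples, not any particular framework's schema.

```python
# Sketch of a metadata-driven ETL step: the rename mapping and filter
# predicate live in config, so the same function serves many pipelines.
# Config keys ("rename", "keep_if") are invented for this example.

def run_step(rows, step_config):
    """Apply a rename + filter step described entirely by metadata."""
    renames = step_config.get("rename", {})
    keep_if = step_config.get("keep_if", lambda row: True)
    out = []
    for row in rows:
        row = {renames.get(k, k): v for k, v in row.items()}
        if keep_if(row):
            out.append(row)
    return out

config = {
    "rename": {"cust_id": "customer_id"},
    "keep_if": lambda row: row["amount"] > 0,
}
rows = [{"cust_id": 7, "amount": 10}, {"cust_id": 8, "amount": -1}]
print(run_step(rows, config))
```

Because the step is data, not code, impact analysis and source-to-target tracking reduce to inspecting the config catalog.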
  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Data Engineer

Job Description

Data Architect (SQL + Python)

  • Salary : Rs. 10,00,000 - Rs. 25,00,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Data Architect

Job Description

Skill Set: Kafka + Python + Azure Databricks
Years of Experience: 8 to 10

  • Salary : As per industry standard.
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Kafka + Python + Azure Databricks

Job Description

Role Description: We need someone who has experience with Alation; such candidates may be rare given how new the product is. If they are lacking, we could have users learn Alation. Once they learn the platform, we will need them to develop solutions in Alation using its APIs (in Python).

Broad scope:
- Work with business owners to complete the custom data ingestion of terms to support the data dictionaries.
- Work with business owners to complete the design of the Terms page to accommodate all the required information.
- Work with the Igloo tech lead(s) on linkage to Snowflake data to inform terms and dictionaries.
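A minimal sketch of the custom term-ingestion work described above is preparing a bulk payload for an HTTP API. The endpoint path, field names, and token header below are invented placeholders, not Alation's actual schema; the vendor's API reference would supply the real one.

```python
# Hypothetical sketch of building a glossary-term bulk-ingestion
# request. The URL path, JSON field names, and auth header are
# placeholders, NOT the real Alation API.
import json
import urllib.request

def build_term_request(base_url, token, terms):
    """Package a list of {name, definition} terms as a POST request."""
    body = json.dumps(
        [{"title": t["name"], "description": t["definition"]} for t in terms]
    ).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/terms/bulk",  # placeholder path
        data=body,
        headers={"Token": token, "Content-Type": "application/json"},
        method="POST",
    )

req = build_term_request(
    "https://catalog.example.com",
    "dummy-token",
    [{"name": "Churn Rate", "definition": "Share of customers lost per period."}],
)
print(req.full_url, req.get_method())
```

The term list itself would come from the business owners' data dictionaries, with Snowflake lineage attached in a later step.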

  • Salary : Rs. 25,000 - Rs. 3,50,000
  • Industry : IT-Software / Software Services
  • Functional Area : IT Software - Application Programming, Maintenance
  • Role Category : Programming & Design
  • Role : Alation Developer