Job Description Summary
GE HealthCare is accelerating its transformation through a series of strategic “AI Big Bets” in Commercial excellence, Logistics optimization, Inventory management, and Manufacturing innovation.
The Enterprise AI team, part of the Chief Data and Analytics Office, is at the forefront of delivering robust, enterprise-grade AI and ML solutions that drive measurable business impact at scale.
As the Staff AI Application Engineer, you will be at the forefront of developing and delivering innovative GenAI and Agentic AI solutions that generate actionable business insights and transform key areas within GE HealthCare, including Finance, Commercial, Supply Chain, Quality, Operational Excellence and Lean, and Manufacturing. We are seeking a highly skilled and motivated AI Application Engineer to join our dynamic team.
You will play a pivotal role in shaping and executing our AI strategy. You'll collaborate across a unified, cross-functional delivery organization - partnering with experts in data engineering, ML engineering, analytics, and GenAI development - to solve complex business challenges and deliver scalable solutions.
Job Description
Core Responsibilities:
Design and Develop: Building AI-powered applications, integrating machine learning and generative models into enterprise-grade software products and internal tools, and owning the full software development lifecycle (SDLC), including unit, integration, and end-to-end testing.
Frontend: Developing modern, intuitive interfaces for AI applications (React/Next.js, TypeScript, or equivalent) with a strong focus on usability, accessibility, and AI explainability.
Backend: Implementing scalable and secure back-end services (FastAPI, Flask, or Node.js) to expose AI capabilities (LLMs, RAG pipelines, AI agents) through standardized APIs.
Translating data science prototypes and GenAI models (LLMs, diffusion models, transformers) into scalable applications or services with intuitive user interfaces and reliable back-end infrastructure.
Collaborating with Insight Leaders and business stakeholders on requirements gathering, project documentation, and development planning.
Partnering with MLOps and GenAIOps teams to deploy, monitor, and continuously improve AI applications within standardized CI/CD pipelines.
Designing and implementing integrations using REST, GraphQL, and gRPC; working with cloud-based AI APIs (Azure, AWS, GCP) and enterprise data sources.
Integrating cloud-native AI services (AWS Bedrock, Azure OpenAI) and open-source frameworks (LangChain, LangGraph) into enterprise environments.
Monitoring application performance and user adoption, iterating on models and workflows to enhance usability and business impact.
Optimizing application performance, infrastructure efficiency, and LLM utilization.
Documenting architectures, APIs, and deployment processes to ensure transparency, reusability, and maintainability.
Requirements
Education: Master's or PhD degree (or equivalent experience) in Computer Science, Software Engineering, Artificial Intelligence, or related STEM field.
Experience: Hands-on experience developing and deploying AI-powered or data-driven applications in enterprise environments.
Advanced proficiency in Python, plus strong working knowledge of TypeScript/JavaScript and at least one modern web framework (React, Next.js, FastAPI, Flask).
Proven track record of implementing end-to-end AI systems, integrating ML/LLM models into scalable microservices or enterprise applications.
Strong experience in ML/GenAI frameworks (TensorFlow, PyTorch, LangChain, AutoGen, Semantic Kernel) and cloud-native AI platforms (AWS Bedrock, Azure OpenAI).
Working knowledge of cloud environments (AWS, Azure, or GCP) and containerization tools (Docker).
Deep experience with Docker, Kubernetes, and CI/CD automation for AI workloads.
Demonstrated experience with RAG pipelines, vector databases, and document retrieval frameworks.
Solid understanding of LLMOps/GenAIOps integration patterns, model evaluation, and prompt optimization workflows.
Strong collaboration skills and the ability to communicate effectively within cross-functional teams.
Ability to mentor junior engineers, perform code reviews, and contribute to architectural decisions.
Strong problem-solving, debugging, and analytical skills, with clear and persuasive communication to technical and business audiences.
#LI-MT1
#LI-Hybrid
Pay Range
For Poland-based positions, Annual Salary Range: 287,200.00 PLN - 394,900.00 PLN
Placement within this range depends on:
Relevant skills and qualifications
Prior job-related experience
Internal equity considerations (alignment with colleagues in similar roles), among other factors
We review pay ranges regularly to ensure they remain competitive with the external market and align with our internal equity considerations.
Benefits & Rewards
In addition to base salary, our employees have access to a comprehensive package of benefits and allowances, which may include:
Health & wellness coverage
Retirement and/or savings plans
Allowances or benefits to support role requirements (e.g., mobility, transport, or role-specific needs such as a company car or allowance where applicable)
Work-life balance support (e.g., flexible working, leave programs)
Recognition and incentive programs aligned with performance and company success
The exact benefits package depends on the role, location, and employment terms as specified in the Colleague Value Proposition document that will be shared prior to the interview or at the offer discussion stage.
Additional Information
Relocation Assistance Provided: Yes