The Team: The IP Security team safeguards S&P Global’s proprietary assets—source code, models, datasets, design artifacts, architecture, and know‑how across applications and infrastructure. We set the standards, controls, and guardrails that prevent leakage, misuse, or unauthorized commercialization of intellectual property, and we enable product teams to innovate quickly while staying compliant with enterprise policy and regulatory obligations.
Key Responsibilities:
Develop and implement comprehensive AI/ML security strategies, policies, standards, and guidelines to protect organizational assets and ensure the secure operation of AI and ML systems
Build a security control framework and reference architectures for GenAI applications.
Help identify security requirements for LoB/Dev teams to follow when building GenAI applications.
Conduct threat modeling exercises to identify potential security risks and vulnerabilities in AI systems, working closely with AI development teams to integrate security into the design and development processes.
Provide thought leadership and creativity to mature GenAI security governance and embed it into our existing cyber security risk appetite framework.
Perform security assessments on AI applications and systems to identify and address vulnerabilities. Develop and implement testing methodologies to evaluate the security posture of AI models and frameworks.
Develop configuration hardening guidelines for cloud services, including native generative AI/ML services such as AWS SageMaker (including SageMaker Notebooks), Bedrock, Kendra, OpenSearch, Lambda, Azure Cognitive Services, Azure OpenAI, GCP Vertex AI, etc.
Stay updated on relevant regulations and standards related to AI security and ensure compliance. Collaborate with legal and compliance teams to align AI systems with industry and regulatory requirements.
Core Skills Required:
Strong programming experience in Python (preferred) or equivalent languages
Solid understanding of LLM / GenAI fundamentals: prompting, embeddings, vector search, RAG, and basic agentic patterns (tool use, planning, orchestration).
Experience running production systems or data pipelines on AWS / Azure / GCP, using containers, serverless, and managed storage/services.
Hands-on familiarity with observability tools (OpenTelemetry, Prometheus, Grafana, ELK, etc.) across logs, metrics, and traces.
Comfort working with structured and unstructured data; strong SQL plus experience with Pandas / Spark / dbt or similar frameworks.
Ability to reason clearly about reliability, performance, and cost trade-offs.
Strong collaboration and communication skills; ability to translate complex concepts for platform, product, data, security, and compliance teams.
Qualifications:
1–5 years of experience in cyber security, software engineering, data engineering, ML engineering, or data science.
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or equivalent practical experience.
Experience with CI/CD, code reviews, and modern engineering best practices.
Nice to Have:
Exposure to agentic AI frameworks (LangChain, LangGraph, OpenAI Agents, etc.)
Experience with LLM observability, eval frameworks, or prior work on production LLM/agent systems.
What We're Looking For:
Beyond skills and experience, we want engineers who:
Build for scale: Think like platform builders and design systems that work across teams, not just for today’s use case.
Lead with observability: Instrument first, debug with data, and deliver dashboards that reveal the truth.
Ship safely: Never deploy without guardrails or validations, even if it adds upfront effort.
Make thoughtful trade-offs: Clearly articulate decisions around cost, quality, latency, and reliability.
Own the end-to-end stack: Move comfortably between data pipelines, agent logic, infrastructure, and production monitoring.
Learn through experimentation: Test ideas, study failures, iterate rapidly, and improve continuously.
Communicate with impact: Explain complex AI concepts in simple, business-relevant terms to technical and non-technical stakeholders.
Stay ahead of the curve: Actively explore emerging technologies like LangGraph, agentic frameworks, and new LLM capabilities.