
AI Research Internship

Lexsi Labs · Bangalore, India · 2h ago
Remote · Internship · Entry-level
17 views · 12 applicants
💼 Competitive Salary

Job Description

Lexsi Labs is one of the leading frontier labs focused on building aligned, interpretable, and safe superintelligence. Most of our work involves creating new methodologies for efficient alignment, interpretability-led strategies, and tabular foundational model research. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety. We work on multiple frontier research ideas and challenges. If selected, you will collaborate closely with our research and engineering teams on one of the following areas:

  • Library Development: architect and enhance open-source Python tooling for alignment, explainability, uncertainty quantification, robustness, and machine unlearning.
  • Explainability & Trust: improve and surface new observations using our own and other SOTA XAI techniques (DLB, LRP, SHAP, Grad-CAM, Backtrace) across text, image, and tabular modalities to understand and present model interpretability.
  • Mechanistic Interpretability: probe internal model representations and circuits, using activation patching, feature visualization, and related methods, to diagnose failure modes and emergent behaviors.
  • Uncertainty & Risk: develop, implement, and benchmark uncertainty estimation methods (Bayesian approaches, ensembles, test-time augmentation) alongside robustness metrics for foundation models.
  • Tabular Foundational Models (Orion): work with our leading tabular foundational model team to improve and launch new tabular foundational model architectures, and contribute to our leading open-source library TabTune.
  • Reinforcement Learning: explore new ideas and algorithms around RL and our new RL fine-tuning library.
  • Research Contributions: author and maintain experiment code, run systematic studies, and co-author whitepapers or conference submissions.
General Required Qualifications

  • Strong Python expertise: writing clean, modular, and testable code.
  • Theoretical foundations: deep understanding of machine learning and deep learning principles, with hands-on experience with PyTorch.
  • Transformer architectures & fundamentals: comprehensive knowledge of attention mechanisms, positional encodings, tokenization, and training objectives in BERT, GPT, LLaMA, T5, MoE, Mamba, etc.
  • Version control & CI/CD: Git workflows, packaging, documentation, and collaborative development practices.
  • Collaborative mindset: excellent communication, peer code reviews, and agile teamwork.

Preferred Domain Expertise (any one of these is sufficient)

  • Explainability: applied experience with XAI methods such as DLB, SHAP, LIME, IG, LRP, DL-Backtrace, or Grad-CAM.
  • Mechanistic interpretability: familiarity with circuit analysis, activation patching, and feature visualization for neural network introspection.
  • Uncertainty estimation: hands-on experience with Bayesian techniques, ensembles, or test-time augmentation.
  • Quantization & pruning: applying model compression to optimize size, latency, and memory footprint.
  • LLM alignment techniques: crafting and evaluating few-shot, zero-shot, and chain-of-thought prompts; experience with RLHF workflows, reward modeling, and human-in-the-loop fine-tuning.
  • Tabular Foundational Models: experience using or improving TFMs such as Orion, TabPFN, or TabICL.
  • Post-training adaptation & fine-tuning: practical work with full-model fine-tuning and parameter-efficient methods (LoRA, adapters), instruction tuning, knowledge distillation, and domain specialization.

Additional Experience (Nice-to-Have)

  • Publications: contributions to CVPR, ICLR, ICML, KDD, WWW, WACV, NeurIPS, ACL, NAACL, EMNLP, IJCAI, or equivalent research experience.
  • Open-source contributions: prior work on AI/ML libraries or tooling.
  • Domain exposure: risk-sensitive applications in finance, healthcare, or similar fields.
  • Performance optimization: familiarity with large-scale training infrastructures.
