Data Scientist Vision-Language Models (VLMs) - Cardinal Integrated Technologies Inc
Posted 2025-10-26
Remote, USA
Full-time
Immediate Start
Position: Data Scientist, Vision-Language Models (VLMs)
Location: San Ramon, CA or Milwaukee, WI
Duration: Full-time

Key Responsibilities

VLM Development, Pose Estimation & Deployment:
- Design, train, and deploy efficient Vision-Language Models (e.g., VILA, Isaac Sim) for multimodal applications including image captioning, visual search, document understanding, pose understanding, and pose comparison.
- Develop and manage Digital Twin frameworks using AWS IoT TwinMaker, SiteWise, and Greengrass to simulate and optimize real-world systems.
- Develop Digital Avatars using AWS services integrated with 3D rendering engines, animation pipelines, and real-time data feeds.
- Explore cost-effective methods such as knowledge distillation, modal-adaptive pruning, and LoRA fine-tuning to optimize training and inference.
- Implement scalable pipelines for training and testing VLMs on cloud platforms (AWS services such as SageMaker, Bedrock, Rekognition, Comprehend, and Textract).

NVIDIA Platforms:
Develop a blend of technical expertise, tool proficiency, and domain-specific knowledge across the following NVIDIA platforms:
- NIM (NVIDIA Inference Microservices): containerized VLM deployment.
- NeMo Framework: training and scaling VLMs across thousands of GPUs. Supported models: LLaVA, LLaMA 3.2, Nemotron Nano VL, Qwen2-VL, Gemma 3.
- DeepStream SDK: integrates pose models such as TRTPose and OpenPose; real-time video analytics and multi-stream processing.

Multimodal AI Solutions:
- Develop solutions that integrate vision and language capabilities for applications like image-text matching, visual question answering (VQA), and document data extraction.
- Leverage interleaved image-text datasets and advanced techniques (e.g., cross-attention layers) to enhance model performance.
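Of the optimization techniques named above, LoRA fine-tuning freezes the base weight matrix W and learns only a low-rank update B·A, which can later be merged back into W for inference. As a minimal, dependency-free sketch of the merged-weight arithmetic (the scaling factor alpha/r follows the standard LoRA formulation; the tiny matrices below are illustrative, not taken from any real model):

```python
def matmul(a, b):
    # naive matrix multiply for small illustrative matrices
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_merge(W, A, B, alpha):
    """Merge a LoRA update into frozen weights: W' = W + (alpha / r) * B @ A.

    W: d_out x d_in frozen base matrix
    B: d_out x r,  A: r x d_in   (r = rank, with r << min(d_out, d_in))
    """
    r = len(A)                 # rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)       # low-rank update, d_out x d_in
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d_out = d_in = 2, rank r = 1, alpha = 1
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]             # d_out x r
A = [[0.5, 0.5]]               # r x d_in
W_merged = lora_merge(W, A, B, alpha=1.0)  # -> [[1.5, 0.5], [1.0, 2.0]]
```

The cost saving comes from training only B and A (r·(d_out + d_in) parameters) instead of the full d_out·d_in matrix; in practice this is done with a library such as Hugging Face PEFT rather than by hand.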
Image Processing and Computer Vision:
- Develop solutions that integrate vision-based deep learning models for applications like live video streaming integration and processing, object detection, image segmentation, pose estimation, object tracking, image classification, and defect detection on medical X-ray images.
- Knowledge of real-time video analytics, multi-camera tracking, and object detection.
- Train and test deep learning models on customized data.

Efficiency Optimization:
- Evaluate trade-offs between model size, performance, and cost using techniques like elastic visual encoders or lightweight architectures.
- Benchmark different VLMs (e.g., GPT-4V, Claude 3.5, Nova Lite) for accuracy, speed, and cost-effectiveness on specific tasks.
- Benchmark performance on GPU vs. CPU.

Collaboration & Leadership:
- Collaborate with cross-functional teams including engineers and domain experts to define project requirements.
- Mentor junior team members and provide technical leadership on complex projects.

Location: San Ramon, CA or Milwaukee, WI (Onsite)

Qualifications

Education:
- Master's or Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

Experience:
- 10+ years of experience in Machine Learning or Data Science roles with a focus on Vision-Language Models.
- Proven expertise in deploying production-grade multimodal AI solutions.
- Experience with self-driving cars and self-navigating robots.

Technical Skills:
- Proficiency in Python and ML frameworks (e.g., PyTorch, TensorFlow).
- Hands-on experience with VLMs such as VILA, Isaac Sim, or VSS.
- Familiarity with cloud platforms like AWS SageMaker or Azure ML Studio for scalable AI deployment.
- Image libraries: OpenCV, PIL, scikit-image.
- Frameworks: PyTorch, TensorFlow, Keras.
- CUDA, cuDNN.
- 3D vision: point clouds, depth estimation, LiDAR.

Soft Skills:
- Strong problem-solving skills with the ability to optimize models for real-world constraints.
- Excellent communication skills to explain technical concepts to diverse stakeholders.
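As a concrete token of the object-detection and multi-camera-tracking skills above: Intersection over Union (IoU) is the standard overlap metric used both to evaluate detectors and to associate detections across frames in tracking. A minimal, dependency-free sketch (the (x1, y1, x2, y2) corner format is an assumption; production pipelines would typically use OpenCV or torchvision utilities instead):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 patch: IoU = 1 / (4 + 4 - 1) = 1/7
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A tracker built on this would greedily (or via Hungarian matching) pair each new detection with the existing track whose last box has the highest IoU above a threshold.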
Preferred Technologies:
- Vision-Language Models: VILA, Isaac Sim, EfficientVLM
- Cloud Platforms: AWS SageMaker, Bedrock
- Optimization Techniques: LoRA fine-tuning, modal-adaptive pruning
- Multimodal Techniques: Cross-attention layers, interleaved image-text datasets
- MLOps Tools: Docker, MLflow

Employers have access to artificial intelligence language tools ("AI") that help generate and enhance job descriptions, and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

Apply to this job