CV

General Information

Name Dwiref Oza
Phone (646) 249-7512
E-mail(s) mithrandir.dso@gmail.com, dwiref.oza@columbia.edu
Summary Deep Learning Engineer with a fondness for Robots and Computer Vision. Almost broke production once. (And then I didn't.)

Experience

  • Mar '24 – Present
    Lead Engineer
    Gordian | Remote
    • Leading research and development of Gordian Sense, a computer-vision-based cloud SaaS that analyzes store shelves and delivers insights to improve inventory management.
    • Software Architect
      • Led the architectural design and development of alexander, a modular framework for deploying computer vision pipelines in the cloud via configurable plugins for multiple deep learning models.
      • Key design features include the ability to run plugins asynchronously via a Producer/Consumer execution paradigm, and an event logger with appropriate log levels for critical errors, warnings, and routine runtime info (see the sketch after this entry).
      • Defined template plugins for the engineering team to adopt and implement, allowing plug-and-play use of cutting-edge deep learning models without breaking the pipeline.
    • Engineering Team Lead
      • Established sound software practices by encouraging the engineering team to strive for full test coverage, clean code and detail-oriented PR etiquette.
      • Led and managed a team of computer vision, MLOps and backend engineers.
      • Improved transparency and boosted development pace by adopting Scrum ceremonies: daily standups, task planning, reviews, and retros.
      • Led regular code reviews to ensure maintainability and reproducibility.
    • Skills & Tools
      • Python, Bash, PyTorch, Docker, AWS {S3, Batch, EC2}, Open3D, NumPy/SciPy, scikit-learn
      • Git, Agile, Scrum, Unit Testing, CI/CD, GitHub Actions
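    • Sketch: a minimal illustration of the Producer/Consumer plugin execution described above, assuming an asyncio work queue; the Plugin type, logger name, queue size, and example plugin are illustrative, not alexander's actual API.

```python
# Hedged sketch of an async Producer/Consumer plugin runner with leveled logging.
# Names and sizes are illustrative, not the framework's real interfaces.
import asyncio
import logging
from typing import Any, Awaitable, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

Plugin = Callable[[Any], Awaitable[Any]]  # a plugin consumes one work item asynchronously

async def producer(queue: asyncio.Queue, items: list[Any]) -> None:
    for item in items:
        await queue.put(item)
        log.info("enqueued %r", item)
    await queue.put(None)  # sentinel: no more work

async def consumer(queue: asyncio.Queue, plugin: Plugin) -> None:
    while True:
        item = await queue.get()
        if item is None:          # sentinel received, stop consuming
            queue.task_done()
            break
        try:
            await plugin(item)
        except Exception:
            log.error("plugin failed on %r", item, exc_info=True)
        finally:
            queue.task_done()

async def example_plugin(item: Any) -> None:
    await asyncio.sleep(0.01)     # stand-in for model inference
    log.info("processed %r", item)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=8)
    await asyncio.gather(producer(queue, list(range(5))), consumer(queue, example_plugin))

if __name__ == "__main__":
    asyncio.run(main())
```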
  • Feb '23 – Feb '24
    Computer Vision Engineer II (L4)
    Path Robotics | Columbus, OH
    • Worked on Path's Adaptive Welding product, Multi-Pass Adaptive Fill.
    • Rapidly prototyped and deployed feature requests, unlocking new welding patterns and stability improvements.
    • Successfully led several RaaS (Robotics-as-a-Service) deployments of autonomous welding cells worth millions in revenue.
    • Model Lifecycle Management - data engineering, data labeling, data science, MLOps, deployment
      • Conceptualized a machine learning model to capture and generalize human welder preferences for multi-pass welding.
      • Handled data collection, hosting, labeling, and cleanup, and set up the ETL (Extract/Transform/Load) pipeline.
      • Selected the model architecture and handled training, A/B testing, field trials, and alpha deployment.
      • Created a proof-of-concept for model serving and tracking with MLFlow and Databricks
      • Deployed the model to cull the search space of a uniform grid search algorithm, translating to a 60% reduction in execution time for weld optimization (see the sketch after this entry).
    • Skills & Tools
      • Python, C++, Bash, PyTorch, AWS CLI, EC2, S3, Databricks, Open3D, Shapely, NumPy/SciPy, scikit-learn, PCL, MLFlow
      • Git, Agile, Scrum, Unit Testing, CI/CD
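    • Sketch: a hedged illustration of culling a uniform grid search with a learned model, as in the bullet above; the weld parameters, feasibility_model, training data, and weld_cost objective are placeholders, not Path's actual code.

```python
# Hedged sketch: use a trained classifier to cull a uniform grid before running
# an expensive objective on the surviving candidates. All names/data are placeholders.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Uniform grid over two illustrative weld parameters.
speeds = np.linspace(5.0, 25.0, 20)      # travel speed
offsets = np.linspace(-2.0, 2.0, 20)     # torch offset
grid = np.array(list(itertools.product(speeds, offsets)))

# Placeholder "preference" model, standing in for one trained on labeled past welds.
X_train = rng.uniform([5.0, -2.0], [25.0, 2.0], size=(200, 2))
y_train = (X_train[:, 0] + 3 * np.abs(X_train[:, 1]) < 20).astype(int)  # toy labels
feasibility_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Cull: only evaluate the expensive objective on candidates the model finds promising.
keep = feasibility_model.predict_proba(grid)[:, 1] > 0.5
survivors = grid[keep]

def weld_cost(params: np.ndarray) -> float:
    speed, offset = params
    return float((speed - 15.0) ** 2 + offset ** 2)  # stand-in for a slow simulation

best = min(survivors, key=weld_cost)
print(f"evaluated {len(survivors)}/{len(grid)} candidates; best = {best}")
```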
  • Feb '22 – Feb '23
    Computer Vision Engineer (L3)
    Path Robotics | Columbus, OH
    • Perception Team
      • Wrote a point cloud registration benchmark utilizing simple raycasting in Open3D.
      • Co-developed a 3D point cloud stitching algorithm that builds a pose graph from relative transforms estimated with point-to-plane ICP; pruning edges with uncertain alignment yielded correct global alignment between the point clouds (see the sketch after this entry).
      • Assumed code ownership of the point cloud stitching ROS service. Handled field error reports and bugfixes.
    • Adaptive Welding Team
      • Wrote an algorithm to find the closest intersection point between two near-orthogonal, non-planar surfaces (2D cross-sections of a mesh).
      • Spearheaded a major refactor of Path's adaptive welding software stack, transitioning it from pre-release to production-ready status; the improved maintainability and stability kept the release stable in production until version 2 was developed.
    • Skills & Tools
      • Python, C++, Bash, Open3D, Shapely, NumPy/SciPy, scikit-learn, PCL
      • Git, Agile, Scrum, Unit Testing, CI/CD
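    • Sketch: a minimal illustration of pose-graph stitching with point-to-plane ICP in Open3D, as described in the Perception Team bullets; file paths, voxel size, and thresholds are placeholders, and the convention follows Open3D's multiway registration example rather than Path's internal code.

```python
# Hedged sketch: consecutive point-to-plane ICP builds pose-graph edges; global
# optimization prunes uncertain edges and refines the poses. Paths are placeholders.
import numpy as np
import open3d as o3d

voxel = 0.02
max_dist = voxel * 1.5

clouds = []
for path in ["scan_0.pcd", "scan_1.pcd", "scan_2.pcd"]:   # placeholder scan files
    pcd = o3d.io.read_point_cloud(path).voxel_down_sample(voxel)
    pcd.estimate_normals()          # point-to-plane ICP needs normals
    clouds.append(pcd)

pose_graph = o3d.pipelines.registration.PoseGraph()
odometry = np.eye(4)
pose_graph.nodes.append(o3d.pipelines.registration.PoseGraphNode(odometry))

for i in range(len(clouds) - 1):
    # Relative transform between consecutive scans via point-to-plane ICP.
    result = o3d.pipelines.registration.registration_icp(
        clouds[i], clouds[i + 1], max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    info = o3d.pipelines.registration.get_information_matrix_from_point_clouds(
        clouds[i], clouds[i + 1], max_dist, result.transformation)
    odometry = result.transformation @ odometry
    pose_graph.nodes.append(
        o3d.pipelines.registration.PoseGraphNode(np.linalg.inv(odometry)))
    pose_graph.edges.append(o3d.pipelines.registration.PoseGraphEdge(
        i, i + 1, result.transformation, info, uncertain=False))

# Global optimization; edges above the prune threshold are treated as false and dropped.
o3d.pipelines.registration.global_optimization(
    pose_graph,
    o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
    o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
    o3d.pipelines.registration.GlobalOptimizationOption(
        max_correspondence_distance=max_dist, edge_prune_threshold=0.25, reference_node=0))
```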
  • May '21 – Feb '22
    Machine Learning Engineer
    Streamn Inc | Cupertino, CA
    • Rescued a prototype IPTV capture backend through expert use of FFmpeg, Docker, and cron management; reduced server outages by 20%, dropping the recording failure rate from 40% to 7%.
    • Worked on video scene-change detection using graph-based community detection (see the sketch after this entry).
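    • Sketch: a hedged illustration of scene-change detection via community detection on a frame-similarity graph, in the spirit of the bullet above; the frame features here are random placeholder histograms rather than decoded video frames.

```python
# Hedged sketch: nodes are frames, edges connect temporally close frames weighted by
# histogram similarity, and modularity-based communities approximate scenes.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_frames, window = 60, 10

# Placeholder per-frame colour histograms; real features would come from video frames.
features = rng.random((n_frames, 32))
features /= features.sum(axis=1, keepdims=True)

G = nx.Graph()
G.add_nodes_from(range(n_frames))
for i in range(n_frames):
    for j in range(i + 1, min(i + window, n_frames)):
        sim = float(np.minimum(features[i], features[j]).sum())  # histogram intersection
        G.add_edge(i, j, weight=sim)

# Communities of mutually similar, temporally adjacent frames ~ scenes;
# boundaries between consecutive communities ~ scene changes.
communities = greedy_modularity_communities(G, weight="weight")
scene_starts = sorted(min(c) for c in communities)
print("approximate scene-change frames:", scene_starts)
```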
  • Mar '20 – Jan '21
    Research Assistant
    COSMOS Project | New York, NY
    • Cloud Enhanced Open Software Defined Mobile Wireless Testbed for City-Scale Deployment (COSMOS) - part of the NSF PAWR program, partially funded by NSF award CNS 1827923.
    • Advisor - Prof. Zoran Kostic
    • Smart City Intersections
      • Led an analytical study on real-time object tracking with YOLOv4 to define and measure the effect of three parameters (scene complexity/object density, video resolution, and framerate) on inference mean average precision (mAP). This work was published in the proceedings of the 19th IEEE International Conference on Smart City, Dec 2021.
      • Explored accelerated Mask R-CNN inference with TensorRT and NVIDIA DeepStream; identified performance differentiators through CUDA profiling.
  • Jun '18 – Jul '19
    Research Associate
    Spectrum Lab, Indian Institute of Science
    • PI - Prof. Chandra Sekhar Seelamantula
    • Medical Advisor - Dr. M. L. Murali Krishna
    • Image Processing Project
      • Wrote an image processing library in Python to pre-process medical image data (retinal images, ultrasound images).
      • This work focused on multi-scale image processing (Gaussian pyramids) and an image structure tensor derived from the Riesz transform (a minimal sketch follows this entry).
    • Deep Learning Projects
      • Developed a segmentation system using U-Nets to identify features of pathological interest in fundus (retinal) images, such as vasculature, lipid deposits, and the optic disc.
      • Co-created a three-stage Diabetic Macular Edema (DME) severity prediction tool that uses U-Net segmentation to localize hard exudates and lipid deposits on the retina; the proximity and frequency of these exudates relative to the fovea determine the clinical severity of DME.
    • Graphic Design Project
      • Created a cohesive design language based on M1 Material Design principles for a retinal image scan pre-screening app, designed for ophthalmologists and eye surgeons to screen medical images for pathologies using Spectrum Lab's research-backed detection algorithms.
      • This app was eventually demoed (with the same design) by Spectrum Lab at the G20 Science Summit in India (2023).
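    • Sketch: a minimal illustration of the multi-scale preprocessing described above: a Gaussian pyramid plus a structure tensor built from the 2D Riesz transform computed in the Fourier domain; scale counts, sigmas, and the random test image are illustrative, not the library's actual values.

```python
# Hedged sketch of Gaussian-pyramid preprocessing and a Riesz-transform structure tensor.
import numpy as np
from scipy import ndimage

def gaussian_pyramid(image: np.ndarray, levels: int = 4, sigma: float = 1.0) -> list[np.ndarray]:
    """Blur-and-downsample pyramid, coarsest level last."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])
    return pyramid

def riesz_structure_tensor(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Per-pixel 2x2 structure tensor from the first-order Riesz components."""
    f = np.fft.fft2(image.astype(float))
    u = np.fft.fftfreq(image.shape[0])[:, None]
    v = np.fft.fftfreq(image.shape[1])[None, :]
    norm = np.sqrt(u ** 2 + v ** 2)
    norm[0, 0] = 1.0                                 # avoid division by zero at DC
    r1 = np.real(np.fft.ifft2(-1j * u / norm * f))   # Riesz component along rows
    r2 = np.real(np.fft.ifft2(-1j * v / norm * f))   # Riesz component along columns
    # Gaussian-smoothed outer products form the tensor entries.
    t11 = ndimage.gaussian_filter(r1 * r1, sigma)
    t12 = ndimage.gaussian_filter(r1 * r2, sigma)
    t22 = ndimage.gaussian_filter(r2 * r2, sigma)
    return np.stack([t11, t12, t12, t22], axis=-1).reshape(*image.shape, 2, 2)

if __name__ == "__main__":
    img = np.random.default_rng(0).random((128, 128))   # stand-in for a retinal image
    scales = gaussian_pyramid(img)
    tensor = riesz_structure_tensor(scales[1])
    print([s.shape for s in scales], tensor.shape)
```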

Education