
Engagements tagged machine-learning

AI for Business
San Diego State University

The research focus is to apply the pre-training techniques of Large Language Models to the encoding step of the Code Search Project, in order to improve the existing model and develop a new code search model. The assistant will explore a transformer or comparable model (such as GPT-3.5) with fine-tuning, an approach that has achieved state-of-the-art performance on NLP tasks. The research also aims to test and evaluate several state-of-the-art models to identify the most promising ones.
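
For illustration, a minimal sketch of the bi-encoder retrieval pattern that such a fine-tuned code search model could build on. The checkpoint (microsoft/codebert-base), the pooling scheme, and the example snippets are assumptions for this sketch, not the project's actual model:

```python
# Minimal bi-encoder code-search sketch (illustrative only; checkpoint choice is an assumption).
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/codebert-base"  # assumed pretrained encoder, not the project's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(texts):
    """Encode queries or code snippets into mean-pooled, L2-normalized vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)          # (batch, tokens, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)         # masked mean pooling
    return torch.nn.functional.normalize(pooled, dim=-1)

query = ["read a csv file into a dataframe"]
snippets = [
    "def load(path):\n    import pandas as pd\n    return pd.read_csv(path)",
    "def add(a, b):\n    return a + b",
]
scores = embed(query) @ embed(snippets).T                 # cosine similarity, higher = better match
print(scores)
```

Fine-tuning would train the same encoder on paired query/code data so that matching pairs score higher than non-matching ones.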

Status: Complete
Bayesian nonparametric ensemble air quality model predictions at high spatio-temporal resolution: daily, nationwide, 1 km grid cells
Columbia University

I aim to run a Bayesian Nonparametric Ensemble (BNE) machine learning model implemented in MATLAB. Previously, I successfully tested the model on Columbia's HPC GPU cluster using SLURM. I have since enabled MATLAB parallel computing and extended my script with additional code to optimize execution.

I want to leverage ACCESS Accelerate allocations to run this model at scale.

The BNE framework is an innovative ensemble modeling approach designed for high-resolution air pollution exposure prediction and spatiotemporal uncertainty characterization. This work requires significant computational resources due to the complexity and scale of the task. Specifically, the model predicts daily air pollutant concentrations (PM2.5 and NO2) at a 1 km grid resolution across the United States, spanning the years 2010–2018. Each daily prediction dataset is approximately 6 GB in size, resulting in substantial storage and processing demands.
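
The BNE model itself is implemented in MATLAB; purely as a toy illustration of the underlying ensemble-averaging idea (not the Bayesian nonparametric weighting scheme, and with hypothetical base-model predictions), a Python sketch might look like:

```python
# Toy ensemble averaging over gridded predictions (NumPy); NOT the BNE implementation.
import numpy as np

n_cells, n_models = 5, 3                       # tiny stand-in for the 1 km national grid
rng = np.random.default_rng(0)
base_preds = rng.normal(10.0, 2.0, size=(n_cells, n_models))  # hypothetical PM2.5 base predictions

weights = np.full(n_models, 1.0 / n_models)    # BNE instead learns spatially varying weights
ensemble_mean = base_preds @ weights
ensemble_spread = base_preds.std(axis=1)       # crude stand-in for predictive uncertainty

for cell, (m, s) in enumerate(zip(ensemble_mean, ensemble_spread)):
    print(f"cell {cell}: PM2.5 ≈ {m:.1f} ± {s:.1f} µg/m³")
```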

To ensure efficient training, validation, and execution of the ensemble models at a national scale, I need access to GPU clusters with the following resources:

  • Permanent storage: ≥100 TB
  • Temporary storage: ≥50 TB
  • RAM: ≥725 GB

In addition to MATLAB, I also require Python and R installed on the system. I use Python notebooks to analyze output data and run R packages through a conda environment in Jupyter Notebook. These tools are essential for post-processing and visualization of model predictions, as well as for running complementary statistical analyses.
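
As a sketch of that mixed workflow, R routines can be reached from a Python notebook through rpy2 (using rpy2 here is an assumption; an R kernel or cell magics in Jupyter would serve equally well, and the example data and t-test are placeholders for the project's actual analyses):

```python
# Calling an R routine from a Python notebook via rpy2 inside a conda environment.
import numpy as np
import rpy2.robjects as ro
from rpy2.robjects.packages import importr

stats = importr("stats")                      # base R 'stats' package

# Hypothetical model output: daily PM2.5 predictions for one grid cell
preds = np.array([8.2, 9.1, 10.4, 7.9, 11.3, 9.8])
r_preds = ro.FloatVector(preds)

# Placeholder statistical analysis run in R on the Python array
result = stats.t_test(r_preds, mu=10.0)
print(result.rx2("p.value")[0])
```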

To finalize the GPU system configuration based on my requirements and initial runs, I would appreciate guidance from an expert. Since I already have approval for the ACCESS Accelerate allocation, this support will help ensure a smooth setup and efficient utilization of the allocated resources.

Status: Complete
Surgical Video Understanding using Video LLM
UCSC

This project aims to develop a deep learning-based system for analyzing surgical videos using multimodal LLMs. The scope includes detecting surgical phases, recognizing instruments, identifying anomalies, and generating real-time or post-operative summaries. Expected outcomes include improved surgical workflow analysis, automated documentation, and enhanced training for medical professionals.

The project will explore state-of-the-art Video LLM architectures and develop new models specific to surgical video understanding, using software packages such as PyTorch, TensorFlow, OpenCV, and Hugging Face’s Transformers. The research need is to improve the interpretability and efficiency of surgical video analysis, leveraging multimodal learning to combine visual and textual understanding.
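
As a rough illustration of frame-level vision-language analysis, the sketch below samples frames with OpenCV and scores them against candidate phase descriptions with CLIP. CLIP is only a stand-in for the Video LLM architectures under consideration, and the video file name and phase labels are hypothetical:

```python
# Zero-shot, frame-level surgical-phase scoring with an off-the-shelf vision-language model.
import cv2
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
phases = ["preparation", "dissection", "suturing", "irrigation"]   # hypothetical phase labels

cap = cv2.VideoCapture("surgery.mp4")          # hypothetical input video
frames = []
while len(frames) < 8:                         # sample a handful of frames
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()

inputs = processor(text=[f"a photo of the {p} phase of surgery" for p in phases],
                   images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)      # (frames, phases)
for i, row in enumerate(probs):
    print(f"frame {i}: {phases[int(row.argmax())]}")
```

A full Video LLM would additionally model temporal context across frames and generate free-text summaries rather than picking from a fixed label set.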

We need high-performance computing (HPC) clusters, large-scale storage, and GPU accelerators, which will be leveraged to train and fine-tune the models efficiently.

Status: On Hold
A brainwide “universal translator” for neural dynamics at single-cell, single-spike resolution
Columbia University

In this project, our primary goal is to develop a multimodal foundation model of the brain by combining large-scale, self-supervised learning with the IBL brainwide dataset. This model aims to serve as a "universal translator," facilitating automatic translation from neural activity to various outputs such as behavior, brain location, neural dynamics prediction, and information flow prediction.

To achieve this, we will leverage ACCESS computational resources for model training, fine-tuning, and testing. These resources will support the computation-intensive tasks involved in training large-scale deep learning models on distributed GPUs, as well as processing and analyzing the extensive dataset. Additionally, we will utilize software packages tailored for deep learning to implement our algorithms and models effectively.

Ultimately, the project's outcome will be shared as an open-source model, serving as a valuable resource for global neuroscience research and the development of brain-computer interfaces. With ACCESS resources, we aim to accelerate the advancement of neuroscience and enable broader participation in brain-related research worldwide.
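
As an illustrative sketch of the self-supervised component, a masked-reconstruction objective over binned spike counts could look roughly like the following PyTorch fragment. The shapes, masking ratio, and two-layer transformer are assumptions for illustration, not the project's architecture:

```python
# Masked self-supervised pretraining on binned spike counts (toy PyTorch sketch).
import torch
import torch.nn as nn

n_neurons, n_bins, d_model = 128, 100, 64
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

class MaskedSpikeModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)       # per-time-bin population embedding
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.decode = nn.Linear(d_model, n_neurons)      # reconstruct masked spike counts

    def forward(self, spikes, mask):
        x = self.embed(spikes * (~mask).unsqueeze(-1))   # zero out masked time bins
        return self.decode(self.encoder(x))

model = MaskedSpikeModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

spikes = torch.poisson(torch.full((8, n_bins, n_neurons), 0.5))  # fake spike-count batch
mask = torch.rand(8, n_bins) < 0.25                               # mask 25% of time bins
recon = model(spikes, mask)
loss = ((recon - spikes) ** 2)[mask].mean()                       # loss only on masked bins
loss.backward(); opt.step()
print(float(loss))
```

A pretrained encoder of this kind could then be fine-tuned with task-specific heads for the downstream "translation" targets (behavior, brain location, and so on) described above.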

Status: Declined