Chris (Yuhao) Liu

yliu298 [at] ucsc [dot] edu

I am an MSc student in Computer Science and Engineering at the University of California, Santa Cruz. I also work as a researcher at Professor Jeffrey Flanigan's JLab.

My research interests broadly lie in understanding generalization in deep learning, computational neuroscience, and the intersection of the two. My goal is to understand to what extent we can leverage existing knowledge about the brain to build human-level intelligent systems.

My current research focuses on explaining the generalization behavior of deep neural networks. Previously, I worked on the scaling law relating training set size to the generalization performance of deep neural networks (i.e., the sample complexity rate).

Before that, I obtained my B.S. in Computer Science and Engineering at UC Santa Cruz.

Blog  /  CV  /  CV of Failure  /  Email  /  Github  /  LinkedIn

profile photo
News
  • [2022-03] I will TA CSE 20 again in Spring 2022.
  • [2022-01] I will TA CSE 144 Applied Machine Learning in Winter 2022.
  • [2021-09] I will serve as a teaching assistant for CSE 20 Beginning Programming in Python in Fall 2021.
Research

These include publications and preprints.

What Affects the Sample Complexity in Practice?
Chris Yuhao Liu, Jeffrey Flanigan
2022

We empirically estimate the power-law exponents of the learning curves of various model architectures and study how a wide range of training conditions alters them in classification settings.
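As a rough illustration of what estimating such an exponent involves (the measurements below are hypothetical): under the scaling law err ≈ a · n^(-α), log(err) is linear in log(n), so α can be read off as the negative slope of a log-log fit.

    import numpy as np

    # Hypothetical (training set size, test error) measurements.
    n = np.array([1000, 2000, 4000, 8000, 16000])
    err = np.array([0.30, 0.22, 0.16, 0.12, 0.09])

    # Under err ~ a * n**(-alpha): log(err) = log(a) - alpha * log(n),
    # so alpha is the negative slope of a linear fit in log-log space.
    slope, _ = np.polyfit(np.log(n), np.log(err), deg=1)
    print(f"estimated power-law exponent alpha = {-slope:.3f}")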

Faster Sample Complexity Rates With Ensemble Filtering
Chris Yuhao Liu, Jeffrey Flanigan
2021

We present a dataset filtering approach that uses an ensemble of classifiers to identify noisy (or non-realizable) examples and exclude them, making a faster sample complexity rate achievable in practice.
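A minimal sketch of what ensemble-based filtering of noisy examples can look like (the model choice, out-of-fold scoring, and threshold are illustrative assumptions, not the paper's exact procedure):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict

    # Synthetic data with 10% label noise.
    X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1,
                               random_state=0)

    # Out-of-fold predicted probabilities from an ensemble; examples whose
    # own label receives low probability are flagged as likely noisy
    # (non-realizable) and excluded.
    proba = cross_val_predict(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X, y, cv=5, method="predict_proba")
    keep = proba[np.arange(len(y)), y] > 0.5  # assumed threshold
    X_clean, y_clean = X[keep], y[keep]
    print(f"kept {keep.sum()} of {len(y)} examples")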

Other Projects

These include coursework and side projects.

Learning to Extract Compact Vector Representations from Weight Matrices
Chris Yuhao Liu
2022

We study the problem of learning to construct compact representations of neural network weight matrices by projecting them into a lower-dimensional space.
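A minimal sketch of the general idea, assuming the simplest possible encoder (a learned linear projection of the flattened matrix; all names here are illustrative):

    import torch
    import torch.nn as nn

    # Flatten a weight matrix and project it into a small embedding space
    # with a learned linear map.
    class WeightEncoder(nn.Module):
        def __init__(self, in_dim, embed_dim=32):
            super().__init__()
            self.proj = nn.Linear(in_dim, embed_dim)

        def forward(self, w):  # w: (batch, rows, cols)
            return self.proj(w.flatten(start_dim=1))

    W = torch.randn(8, 64, 64)          # a batch of 64x64 weight matrices
    z = WeightEncoder(in_dim=64 * 64)(W)
    print(z.shape)                      # torch.Size([8, 32])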

Understanding biased datasets and machines requires rethinking bias from scratch
Chris Yuhao Liu, Yuhang Gan, Zichao Li, Ruilin Zhou
2022

We survey recent work on dataset bias and machine learning bias.

Sample Complexity Scaling Laws For Adversarial Training
Chris Yuhao Liu
2021

We show that adversarial training (with the Fast Gradient Sign Method and Projected Gradient Descent) reduces the empirical sample complexity rate for MLPs and a variety of CNN architectures on MNIST and CIFAR-10.
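For reference, a minimal sketch of the FGSM perturbation used in this kind of adversarial training (the model, data, and epsilon are placeholders):

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, eps=0.1):
        # Perturb x by eps in the direction of the sign of the loss
        # gradient; PGD iterates this step with a projection.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

Adversarial training then replaces (or mixes in) such perturbed batches during training.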

TAPT: Text Augmentation Using Pre-Trained Transformers With Reinforcement Learning
UC Santa Cruz
2020-07

A distilled RoBERTa model serves as the text classifier and a GPT-2 (345M) as the text generator, trained with the proximal policy optimization (PPO) framework.
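A minimal sketch of the generate-and-score step that produces the PPO reward (the checkpoints below are stand-ins: a DistilBERT sentiment classifier in place of the distilled RoBERTa, and the small GPT-2 in place of the 345M one; the PPO update itself is omitted):

    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    classifier = pipeline(
        "text-classification",
        model="distilbert-base-uncased-finetuned-sst-2-english")

    prompt = "The movie was"
    sample = generator(prompt, max_new_tokens=20)[0]["generated_text"]

    # The classifier's confidence in the target label serves as the reward
    # that a PPO step would maximize.
    out = classifier(sample)[0]
    reward = out["score"] if out["label"] == "POSITIVE" else 1 - out["score"]
    print(sample, reward)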

Conditional Generation of Research Paper Abstracts with GPT-2
UC Santa Cruz
2020-06

A GPT-2 (774M) model trained on all research paper titles and abstracts in the cs.AI, cs.LG, cs.CL, and cs.CV categories on arXiv.

This project was the winner of the Image/Text Generation Competition for the course CSE 142 Machine Learning in Spring 2020.

Sentiment Analysis With Transformers
UC Santa Cruz
2020-06

A RoBERTa (355M) model fine-tuned on the IMDb dataset for sentiment classification; a minimal fine-tuning sketch appears below.

This project was the winner of the Sentiment Analysis Competition for the course CSE 142 Machine Learning in Spring 2020.
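A minimal sketch of such a fine-tuning setup (roberta-large is the ~355M-parameter checkpoint; the subset size and hyperparameters are illustrative):

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("roberta-large")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-large", num_labels=2)

    # Small IMDb subset so the sketch runs quickly.
    ds = load_dataset("imdb")["train"].shuffle(seed=0).select(range(1000))
    ds = ds.map(lambda b: tok(b["text"], truncation=True,
                              padding="max_length", max_length=256),
                batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out",
                               per_device_train_batch_size=8,
                               num_train_epochs=1),
        train_dataset=ds)
    trainer.train()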

Service


This is a fork of Jon Barron's website.