Chris (Yuhao) Liu

yliu298 [at] ucsc [dot] edu

I am an MSc student in Computer Science and Engineering at the University of California, Santa Cruz. I am also a researcher in Professor Jeffrey Flanigan's lab, where I have enjoyed working for the past two and a half years. I will be applying to PhD programs in Fall 2022.

My research interests broadly lie in {deep learning} ∪ {computational neuroscience}. My goal is to leverage knowledge of both machine learning and learning in the brain to build (super?)-human-level intelligent systems that can learn (adaptively ∧ continually ∧ interpretably).

My current research focuses on fundamental problems in deep learning. I am particularly interested in demystifying the generalization behavior of large and small neural networks (e.g., double descent, the expressive capacity of large networks, data and model scaling laws, learning under noise, lottery tickets, out-of-distribution generalization, and the neural tangent kernel). I am also working with Professor Yang Liu on unifying notions of bias in data and in machines.

Previously, I obtained my B.S. in Computer Science and Engineering from UC Santa Cruz.

Blog  /  CV  /  CV of Failure  /  Email  /  Github  /  LinkedIn

News
  • [2022-06] I started an internship with Professor Yang Liu.
Previous events
  • [2021-06] I will (re)join UCSC as an MSc student.
  • [2020-06] I joined Professor Jeffrey Flanigan's JLab.
Research

These include publications and preprints.

What Affects the Sample Complexity in Practice?
Chris Yuhao Liu, Jeffrey Flanigan
2022

We empirically estimate the power-law exponents of sample complexity for various model architectures and study how these exponents change under a wide range of training conditions in classification.
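A minimal sketch of the general technique (not the paper's actual code): train on nested subsets of increasing size, measure test error, and fit a line in log-log space, whose negative slope is the power-law exponent. The subset sizes and error values below are hypothetical.

```python
import numpy as np

# Hypothetical measurements: training subset sizes and resulting test errors.
subset_sizes = np.array([500, 1000, 2000, 4000, 8000, 16000])
test_errors = np.array([0.42, 0.31, 0.23, 0.17, 0.125, 0.092])

# Fit log(error) = log(a) - alpha * log(n) by least squares;
# the slope of the fit is -alpha, where error ~ a * n^(-alpha).
slope, intercept = np.polyfit(np.log(subset_sizes), np.log(test_errors), 1)
alpha = -slope
print(f"estimated power-law exponent alpha = {alpha:.3f}")
```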

Faster Sample Complexity Rates With Ensemble Filtering
Chris Yuhao Liu, Jeffrey Flanigan
2021

We present a dataset filtering approach that uses sets of classifiers, in a manner similar to ensembling, to identify noisy (or non-realizable) examples and exclude them, making a faster sample complexity rate achievable in practice.
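A minimal sketch of the general idea, not the paper's method: train several classifiers on different cross-validation folds and drop training examples whose labels disagree with most out-of-fold predictions. The model choice, fold counts, and agreement threshold here are assumptions, and `X`, `y` are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_predict

def filter_noisy(X, y, n_models=5, min_agreement=0.6):
    votes = np.zeros(len(y), dtype=float)
    for seed in range(n_models):
        # Out-of-fold predictions so each example is scored only by
        # models that never saw it during training.
        cv = KFold(n_splits=5, shuffle=True, random_state=seed)
        preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=cv)
        votes += (preds == y)
    keep = votes / n_models >= min_agreement  # likely-clean examples
    return X[keep], y[keep]
```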

Other Projects

These include coursework and side projects.

Learning to Extract Compact Vector Representations from Weight Matrices
Chris Yuhao Liu
2022
[Code]

We study the problem of learning to construct compact representations of neural network weight matrices by projecting them into a smaller space.
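One way to read "projecting into a smaller space" is a learned linear autoencoder over flattened weight matrices; the sketch below is an assumed illustration of that idea, not the project's code, and the dimensions and random data are placeholders.

```python
import torch
import torch.nn as nn

dim_in, dim_code = 64 * 64, 32  # hypothetical weight-matrix and code sizes
encoder = nn.Linear(dim_in, dim_code)
decoder = nn.Linear(dim_code, dim_in)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

weights = torch.randn(128, dim_in)  # stand-in batch of flattened weight matrices
for step in range(100):
    code = encoder(weights)          # compact vector representation
    recon = decoder(code)            # reconstruction back to weight space
    loss = nn.functional.mse_loss(recon, weights)
    opt.zero_grad()
    loss.backward()
    opt.step()
```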

Understanding Biased Datasets and Machines Requires Rethinking Bias From Scratch
Chris Yuhao Liu, Yuhang Gan, Zichao Li, Ruilin Zhou
2022

We survey recent work on dataset bias and machine learning bias.

Sample Complexity Scaling Laws For Adversarial Training
Chris Yuhao Liu
2021

We show that adversarial training (with the Fast Gradient Sign Method and Projected Gradient Descent) reduces the empirical sample complexity rate for MLPs and a variety of CNN architectures on MNIST and CIFAR-10.
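For reference, a single FGSM adversarial training step in PyTorch looks roughly like the sketch below; this is the standard method, not the project's exact setup, and the epsilon value is a placeholder.

```python
import torch
import torch.nn.functional as F

def fgsm_train_step(model, optimizer, x, y, eps=0.1):
    # Build adversarial examples: perturb each input by eps in the
    # direction of the sign of the loss gradient (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # Train on the adversarial examples instead of the clean ones.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```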

TAPT: Text Augmentation Using Pre-Trained Transformers With Reinforcement Learning
2020
[Code]

A distilled RoBERTa model serves as the text classifier and a GPT-2 (345M) model serves as the text generator, trained using the proximal policy optimization (PPO) framework.
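A hypothetical sketch of the reward signal in this kind of setup: GPT-2 generates a candidate text and the classifier scores it, with that score used as the PPO reward. The model checkpoints, prompt, and target class below are assumptions, and the PPO policy update itself is omitted.

```python
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

gen_tok = AutoTokenizer.from_pretrained("gpt2-medium")        # 345M generator
generator = AutoModelForCausalLM.from_pretrained("gpt2-medium")
clf_tok = AutoTokenizer.from_pretrained("distilroberta-base")  # distilled RoBERTa
classifier = AutoModelForSequenceClassification.from_pretrained("distilroberta-base")

# Sample an augmented text from the generator.
prompt = gen_tok("The movie was", return_tensors="pt")
sample = generator.generate(**prompt, max_new_tokens=30, do_sample=True)
text = gen_tok.decode(sample[0], skip_special_tokens=True)

# Score it with the classifier; the class probability acts as the reward
# that a PPO step would then use to update the generator.
with torch.no_grad():
    logits = classifier(**clf_tok(text, return_tensors="pt")).logits
reward = logits.softmax(-1)[0, 1]
```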

Conditional Generation of Research Paper Abstracts with GPT-2
2020
[Code]

A GPT-2 (774M) model trained on all research paper titles and abstracts in the cs.AI, cs.LG, cs.CL, and cs.CV categories on arXiv.

This project was the winner of the Image/Text Generation Competition for the course CSE142 Machine Learning in Spring 2020.
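A hypothetical sketch of how title-conditioned training examples might be formatted for GPT-2 fine-tuning; the delimiter and field layout are assumptions, not the project's actual format.

```python
def make_example(title, abstract, eos="<|endoftext|>"):
    # Condition generation on the title; during fine-tuning the model
    # learns to continue a title prefix with its abstract.
    return f"{title}\nABSTRACT: {abstract}{eos}"
```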

Sentiment Analysis With Transformers
2020
[Code]

A RoBERTa (355M) model fine-tuned for sentiment classification on the IMDb dataset.

This project was the winner of the Sentiment Analysis Competition for the course CSE142 Machine Learning in Spring 2020.
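A minimal fine-tuning sketch of this kind of setup (an assumed configuration, not the original code): roberta-large, which is roughly 355M parameters, on IMDb via the Hugging Face Trainer. Batch size, epochs, and sequence length are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2)

# Tokenize the IMDb reviews; labels are already 0/1 sentiment.
imdb = load_dataset("imdb").map(
    lambda b: tok(b["text"], truncation=True, max_length=256), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=8,
                           num_train_epochs=2),
    train_dataset=imdb["train"],
    eval_dataset=imdb["test"],
)
trainer.train()
```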

Service


This is a fork of Jon Barron's website.