Chris (Yuhao) Liu

yliu298 [at] ucsc [dot] edu

I am an MSc student in Computer Science and Engineering at the University of California, Santa Cruz. I also work as a researcher in Professor Jeffrey Flanigan's JLab.

My research interests broadly lie in understanding generalization in deep learning, computational neuroscience, and the intersection of the two. My goal is to understand to what extent we can leverage existing knowledge about the brain to build human-level intelligent systems.

My current research focuses on explaining the generalization behavior of deep neural networks. Previously, I worked on scaling laws relating the amount of training data to the generalization performance of deep neural networks (i.e., the sample complexity rate).

Before that, I obtained my B.S. in Computer Science and Engineering at UC Santa Cruz.

Blog  /  CV  /  CV of Failure  /  Email  /  Github  /  LinkedIn

News
  • [2022-01] I will TA CSE 144 Applied Machine Learning in Winter 2022.
  • [2021-09] I will serve as a teaching assistant for CSE 20 Beginning Programming in Python in Fall 2021.
  • [2021-06] I will (re)join UCSC as an MSc student.
  • [2020-09] I am thrilled to tutor the course CSE 142 Machine Learning in Fall 2020 at UCSC.
  • [2020-06] I joined Professor Jeffrey Flanigan's JLab.
Research

These include publications and preprints.

Faster Sample Complexity Rates With Ensemble Filtering
Chris Yuhao Liu, Jeffrey Flanigan
2021
In submission

We present a dataset filtering approach that uses sets of classifiers, similar to ensembling, to identify noisy (or non-realizable) examples and exclude them, so that a faster sample complexity rate is achievable in practice.
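
Since the paper is in submission, the sketch below is only a generic illustration of ensemble-based filtering, not the method from the paper: several classifiers are trained on bootstrap resamples, and examples the ensemble consistently misclassifies are treated as noisy and dropped. The function name, model choice, and threshold are all assumptions.

    # Illustrative ensemble-based filtering, NOT the method from the paper:
    # examples that ensemble members consistently misclassify are treated
    # as noisy (non-realizable) and removed from the training set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils import resample

    def filter_noisy(X, y, n_models=10, threshold=0.8):
        misclassified = np.zeros(len(y))
        for seed in range(n_models):
            # Train each ensemble member on a bootstrap resample.
            Xb, yb = resample(X, y, random_state=seed)
            clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
            misclassified += (clf.predict(X) != y)
        # Keep examples the ensemble classifies correctly often enough.
        keep = (misclassified / n_models) < threshold
        return X[keep], y[keep]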

Sample Complexity Scaling Laws For Adversarial Training
Chris Yuhao Liu
2021

We show that adversarial training (with the Fast Gradient Sign Method and Projected Gradient Descent) reduces the empirical sample complexity rate for MLPs and a variety of CNN architectures on MNIST and CIFAR-10.
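
For context, FGSM perturbs an input by the sign of the loss gradient, x_adv = x + ε · sign(∇_x L(θ, x, y)). Below is a minimal PyTorch sketch of that step (a standard formulation, not this project's code):

    # Minimal FGSM step (Goodfellow et al.); illustrative, not project code.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.1):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that increases the loss, then clip to [0, 1].
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

Adversarial training then substitutes (fgsm(model, x, y), y) for (x, y) in each minibatch update; PGD iterates this step several times with projection back onto the ε-ball.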

Other Projects

These include coursework and side projects.

TAPT: Text Augmentation Using Pre-Trained Transformers With Reinforcement Learning
UC Santa Cruz
2020-07

A distilled RoBERTa model as a text classifier and a GPT-2 (345M) as a text generator, trained using the proximal policy optimization (PPO) framework
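
A faithful PPO loop is too long to show here, so the sketch below substitutes a plain REINFORCE update (PPO adds clipping and a value baseline) and an off-the-shelf sentiment pipeline as a stand-in reward model; the prompt, reward definition, and hyperparameters are assumptions, not this project's setup.

    # Simplified policy-gradient (REINFORCE) sketch, standing in for PPO;
    # the reward model, prompt, and hyperparameters are illustrative only.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer, pipeline

    tok = GPT2Tokenizer.from_pretrained("gpt2-medium")  # ~345M parameters
    gen = GPT2LMHeadModel.from_pretrained("gpt2-medium")
    reward_model = pipeline("sentiment-analysis")       # stand-in classifier
    opt = torch.optim.Adam(gen.parameters(), lr=1e-5)

    prompt = tok("The movie was", return_tensors="pt").input_ids
    seq = gen.generate(prompt, max_length=30, do_sample=True)
    text = tok.decode(seq[0], skip_special_tokens=True)

    # Reward: classifier confidence in the generated text's label.
    reward = reward_model(text)[0]["score"]

    # REINFORCE: minimizing reward-weighted NLL raises the log-probability
    # of high-reward generations.
    nll = gen(seq, labels=seq).loss
    loss = reward * nll
    opt.zero_grad(); loss.backward(); opt.step()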

Conditional Generation of Research Paper Abstracts with GPT-2
UC Santa Cruz
2020-06

A GPT-2 (774M) trained on all research paper titles and abstracts under cs.AI, cs.LG, cs.CL, and cs.CV on arXiv

This project was the winner of the Image/Text Generation Competition for the course CSE 142 Machine Learning in Spring 2020.
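
One plausible way such conditioning works (an assumption about this project, not its actual code) is to fine-tune on "Title: ...\nAbstract: ..." sequences and then prompt with a new title at inference time, roughly:

    # Hypothetical inference sketch; in practice the fine-tuned checkpoint,
    # not the base model, would be loaded. The prompt format is an assumption.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2-large")  # 774M parameters
    model = GPT2LMHeadModel.from_pretrained("gpt2-large")

    prompt = "Title: Sample Complexity Scaling Laws For Adversarial Training\nAbstract:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=200, do_sample=True, top_p=0.9)
    print(tok.decode(out[0], skip_special_tokens=True))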

Sentiment Analysis With Transformers
UC Santa Cruz
2020-06

A RoBERTa (355M) model fine-tuned on the IMDb dataset

This project was the winner of the Sentiment Analysis Competition for the course CSE 142 Machine Learning in Spring 2020.

Service


This is a fork of Jon Barron's website.