CS498: Algorithmic Engineering. Final Project (30%)

Overview

The final project is your chance to go deep. Pick a topic connected to what we have covered in class (or will cover) – LP/IP solvers, convex optimization, metaheuristics, SAT/SMT, or Machine Learning Systems – and implement, benchmark, or extend something substantial. The goal is to practice the engineering side of algorithms: build a system, measure it, compare it to baselines, and write it up clearly.

This is an individual project.

Project Tracks

Choose one of the following three directions.

Track A: Reproduce a Research Paper (and potentially extend it).

Pick a paper from a venue like ALENEX, SEA, Mathematical Programming Computation, INFORMS Journal on Computing, NeurIPS, ICML, or similar. Reproduce the main algorithm, implement one or more baselines, and compare performance (speed, solution quality, or both) on benchmark instances.

What we are looking for: faithful implementation, fair baselines, clear experimental methodology, and honest discussion of results (including when the paper’s method does not win).

If you find an existing open-source implementation of the paper’s algorithm, reproduce some of its main results, then (optionally) extend it in a meaningful way: new datasets, an algorithmic tweak, a new baseline/heuristic to compare against, a different solver backend, or a head-to-head comparison the original authors did not include.

For example, one project idea is to implement the algorithm from this paper and benchmark it against vanilla KNN, adding new test instances and measuring more than just speed (for example, the memory consumption of the different algorithms and of any heuristics you come up with).
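To make the "measure more than just speed" point concrete, here is a minimal sketch of what such a benchmark harness might look like. The two exact nearest-neighbor variants (full sort vs. heap selection) are illustrative stand-ins, not the paper's actual algorithm, and the instance sizes are placeholders; the pattern to note is timing with `time.perf_counter` and peak-memory tracking with `tracemalloc`.

```python
import heapq
import random
import time
import tracemalloc

def knn_full_sort(points, q, k):
    # exact k-NN baseline: sort every point by squared distance
    return sorted(points, key=lambda p: (p[0]-q[0])**2 + (p[1]-q[1])**2)[:k]

def knn_heap(points, q, k):
    # same answer, but selects the k smallest without a full sort
    return heapq.nsmallest(k, points, key=lambda p: (p[0]-q[0])**2 + (p[1]-q[1])**2)

def benchmark(algo, points, queries, k):
    """Return (wall time in seconds, peak bytes) for running algo on all queries."""
    tracemalloc.start()
    t0 = time.perf_counter()
    for q in queries:
        algo(points, q, k)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

random.seed(0)
points = [(random.random(), random.random()) for _ in range(2000)]
queries = [(random.random(), random.random()) for _ in range(50)]
for name, algo in [("full-sort", knn_full_sort), ("heap", knn_heap)]:
    t, mem = benchmark(algo, points, queries, k=5)
    print(f"{name}: {t:.3f}s, peak {mem/1024:.0f} KiB")
```

A real project would swap in the paper's data structure for one of these callables and report results over many seeds and instance sizes, not a single run.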

Track B: Deep Dive into a Course Topic

Take a topic we covered in class and go deeper. Read recent papers and heuristics, implement them, and present a comparative study. Examples:

For example, you could write a report surveying the branching techniques used in modern branch-and-bound solvers, benchmark them on open-source test instances to see how quickly they converge across different inputs, and optionally analyze which techniques come out on top.
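As a miniature of what such a comparison could look like, here is a sketch of a toy 0/1-knapsack branch-and-bound with a pluggable branching rule, counting the nodes each rule explores. The two rules (first undecided index vs. best value/weight ratio) are simple illustrative stand-ins for the branching techniques a real survey would cover, and the instance is tiny; node counts on real MIP benchmarks are the interesting measurement.

```python
def knapsack_bb(values, weights, cap, pick_branch_var):
    """0/1 knapsack branch-and-bound. pick_branch_var chooses which
    undecided item to branch on. Returns (best_value, nodes_explored)."""
    n = len(values)
    best, nodes = 0, 0
    # indices sorted by value/weight ratio, for the fractional (LP) bound
    order = sorted(range(n), key=lambda i: -values[i] / weights[i])

    def bound(fixed, used_w, used_v):
        # fractional-knapsack upper bound over the undecided items
        b, room = used_v, cap - used_w
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= room:
                room -= weights[i]
                b += values[i]
            else:
                b += values[i] * room / weights[i]
                break
        return b

    def rec(fixed, used_w, used_v):
        nonlocal best, nodes
        nodes += 1
        if used_w > cap:
            return
        best = max(best, used_v)
        undecided = [i for i in range(n) if i not in fixed]
        if not undecided or bound(fixed, used_w, used_v) <= best:
            return
        i = pick_branch_var(undecided, values, weights)
        rec({**fixed, i: 1}, used_w + weights[i], used_v + values[i])
        rec({**fixed, i: 0}, used_w, used_v)

    rec({}, 0, 0)
    return best, nodes

first_index = lambda und, v, w: und[0]
best_ratio = lambda und, v, w: max(und, key=lambda i: v[i] / w[i])

vals, wts = [60, 100, 120, 40, 30], [10, 20, 30, 15, 5]
for name, rule in [("first-index", first_index), ("best-ratio", best_ratio)]:
    opt, nodes = knapsack_bb(vals, wts, 50, rule)
    print(f"{name}: optimum {opt}, nodes explored {nodes}")
```

Both rules reach the same optimum; the report-worthy question is how the node counts diverge as instances grow.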

Track C: Solve a New Problem

Use one of the methods from class (or a related method) to solve a problem that interests you. Model it, solve it, and compare approaches.

This track is the most open-ended. The key is having a clear problem, a clear method, and a clear comparison.

One example I’m personally excited about is writing a solver for Cities: Skylines (a game). My spouse is obsessed with this game, and we’ve both been curious about what a solver would look like. The combinatorial explosion of the action space makes exact solving virtually impossible, so it would be fascinating to design a heuristic that can actually play the game! Alternatively, you could study a scheduling problem you encounter in real life, similar to what Fleetline does, and write a custom solver for it. It’s really up to you!


Deliverables

1. Proposal (1 page max)

Due: April 3, 2026

A short document covering:

You don’t have to have everything ready; just a solid-ish plan. Things can go wrong (it’s okay!), just keep us updated if your plan changes.

Submit as a PDF on Gradescope.

2. Final Submission

Due: May 18, 2026

Two components:

Report (5 pages max, PDF). Use any reasonable format (LaTeX preferred). The report should include:

Code (GitHub link, optional but strongly encouraged). If you implemented new algorithms or ran experiments:

Submit the report on Gradescope. Include the GitHub link in the report.


Suggested Papers and Topics

Below are starting points organized by course topic. You are welcome to choose papers not on this list (especially if you go through Track C!).

Branch-and-Bound and IP Solver Engineering

Cutting Planes

TSP

Network Flow and Min-Cut

LP Solver Engineering

MIP Modeling and Formulation

SDP and Max-Cut

Convex Optimization Engineering

SAT/SMT Engineering

ML + Combinatorial Optimization


Grading Rubric

Component                        | Weight | What we look for
Problem selection and motivation | 10%    | Clear, well-scoped, connected to course material
Technical depth                  | 30%    | Correct implementation, understanding of the method
Experimental evaluation          | 30%    | Fair baselines, meaningful metrics, sufficient instances
Writing and presentation         | 20%    | Clear writing, good figures/tables, honest discussion
Code quality (if applicable)     | 10%    | Reproducible, documented, readable

Tips

Questions?

Come to office hours or post on the course forum.