Course project

All students will write a mini-paper as a final project—empirical, theoretical, or both—using the NeurIPS conference format.

Working in groups

  • You may work solo or in groups of up to three (special permission required to exceed).
  • Effort scales with team size (2× for two authors; 3× for three).
  • Include a contributions paragraph naming each author’s concrete contributions.
  • Paper length is 3 + n pages (where n = group size), excluding references and the contributions paragraph.
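For reference, a minimal NeurIPS-style preamble might look like the sketch below. The style-file name changes every year, so `neurips_2025` is an assumption; download the current year's official style file from the NeurIPS site.

```latex
\documentclass{article}
% The NeurIPS style file is re-released each year; the package name below
% is an assumption -- substitute the current year's file.
\usepackage[final]{neurips_2025}

\title{Your Mini-Paper Title}
\author{Author One \and Author Two}

\begin{document}
\maketitle
% 3 + n content pages, then references and the contributions paragraph.
\end{document}
```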

Milestones

Date   | Deliverable                  | Points
Oct 12 | Project matching spreadsheet | 3
Oct 31 | 1–2 page progress report     | 7
Dec 4  | In-class presentation        | 10
Dec 12 | Final report                 | 20

Project matching spreadsheet (3 pts)

Each student will add information about topics they find interesting to a project matching spreadsheet. The course staff won’t assign matches; rather, the goal is that this process helps students find classmates with similar interests.


Progress report (7 pts)

Each group will submit a short progress report of 1–2 pages describing your project and partial progress. Your writeup should answer (implicitly or explicitly) the following simplified version of the Heilmeier Catechism:

  • What are you trying to do?
  • How is it done today, and what are the limitations?
  • What’s new in your idea, and why might it succeed?
  • If successful, what difference will it make?
  • What’s the first thing you will try or have already tried to test the idea? Include any initial progress you’ve made so far.

Final project grading: Course relevance

The final project is graded out of 30 points (10 for the in-class presentation and 20 for the final report). Naturally, the project must be clearly about AI for algorithmic reasoning & optimization. Projects not related to the course will receive 0 of the 30 final project points.

In-scope examples include:

  • Algorithmic reasoning as a lens to understand ML models (aligning with the lectures from 10/7–10/16)
  • ML for graph/constraint/NP-hard problems (aligning with lectures 10/21–10/30)
  • Learning to formalize optimization problems (aligning with lecture 11/6)
  • ML to guide/configure solvers (aligning with lectures 11/11, 11/13)
  • Theoretical guarantees for learned algorithms/heuristics (aligning with lectures 11/18, 11/20)

You will receive feedback about whether your project is in-scope when you turn in your progress report and will have the opportunity to turn in a revision if not.


Final paper: Novelty & insight (10 pts)

Originality (3 pts)

Examples include:

  • New algorithm or heuristic
  • New theoretical result
  • New problem formulation/evaluation/analysis protocol or benchmark
  • New combination of known elements that yields a surprising result

Score | Description
3 | Clear novelty with strong supporting evidence.
2 | Straightforward extension of an existing idea.
1 | Minor variation/ablation; weak distinction from prior work.
0 | No identifiable novelty.

Depth of analysis (3 pts)

You should include a coherent analysis that explains how and why your idea works or fails. Tools to illustrate depth include:

  • Ablations isolating methodological components
  • Fair baselines/metrics (e.g., both classical and learned methods)
  • Generalization tests (e.g., out-of-distribution instances)
  • Thorough proofs
  • Counterexamples/failure modes (e.g., why a hypothesized theorem doesn’t hold)

Not all depth tools will be relevant to all projects.

Score | Description
3 | Relevant depth tools executed well.
2 | Some depth but gaps (e.g., missing ablation or weak baselines).
1 | Superficial analysis; results reported with little interpretation.
0 | No analysis beyond raw outcomes.

Potential impact (2 pts)

It should be clear who benefits from your contributions and how. Impact can come in many different forms, including:

  • Methodological: e.g., improves solution quality/gap, runtime, etc.
  • Theoretical: e.g., clarifies when/why learned components help or fail.

Score | Description
2 | Clear, well-argued value to a broader community.
1 | Some value but narrow or under-substantiated.
0 | Unclear who benefits or how.

Future work (2 pts)

Include specific, realistic next steps.

Score | Description
2 | 2–3 specific, realistic next steps.
1 | Plausible but high-level ideas with limited detail.
0 | Vague or no future work.

Final paper: Writing & completeness (10 pts)

Any hallucinated reference will result in 0 out of 10 points for this section. “Hallucinated” includes any citation that cannot be verified or misrepresents bibliographic details (incorrect title, author list, venue, or year). Citing arXiv is fine, but if the paper appeared in a conference/journal, it’s preferable to cite the conference/journal version.

Structure & formatting (2 pts)

Your report should comply with the required structure.

Score | Description
2 | NeurIPS format; within 3 + n pages (references/contributions excluded); clear sectioning; specific contributions paragraph (if applicable).
1 | Slightly under length; contributions paragraph vague.
0 | Major non-compliance with formatting instructions.

Problem formulation (2 pts)

It should be clear exactly what problem you solved and under what assumptions.

Score | Description
2 | Precise definitions, assumptions, objectives, and evaluation setup (if applicable).
1 | Mostly clear but some missing assumptions/ambiguity.
0 | Unclear scope; it’s not obvious what is solved or proved.

Related work (2 pts)

The report should have accurate positioning and proper attribution.

Score | Description
2 | Accurate/sufficient citations; clear where prior methods fall short.
1 | Coverage OK but thin; shortcomings of prior work unclear.
0 | Missing or very weak context.

Readability (2 pts)

The writing, notation, and figures should be clear.

Score | Description
2 | Clear narrative; consistent notation; legible, labeled figures/tables referenced in text.
1 | Understandable but with jargon jumps/undefined symbols or hard-to-read figures.
0 | Disorganized; illegible figures; math without explanation.

Reproducibility (2 pts)

The reader should be able to re-run your experiments or verify your proofs. Examples include:

  • Empirical: runnable code + configs (public repo or share privately with the course staff’s GitHub handles vitercik and zzyunzhi); data sources/creation, seeds, splits, metrics, hardware, hyperparams; clear how to reproduce each table/figure.
  • Theoretical: complete proofs (appendix allowed); assumptions stated; auxiliary lemmas included or cited clearly (e.g., Lemma 3.5 by Vitercik et al., ‘25).

Score | Description
2 | Readers can reproduce or check results/proofs.
1 | Partially reproducible (e.g., missing seeds/configs/proof details).
0 | Not reproducible; key details missing.
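For empirical projects, the seed and config bookkeeping described above can be as lightweight as the following sketch. The function and config field names are illustrative, not required by the course:

```python
import random

import numpy as np


def set_seed(seed: int) -> None:
    """Seed Python's and NumPy's RNGs so experiment runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)


# Record everything a reader needs to re-run the experiment in one place
# (field names here are hypothetical examples).
config = {
    "seed": 0,
    "split": {"train": 0.8, "test": 0.2},
    "metric": "optimality_gap",
}

# Re-seeding before each run reproduces the same random draws exactly.
set_seed(config["seed"])
sample_a = np.random.rand(3)

set_seed(config["seed"])
sample_b = np.random.rand(3)

assert np.allclose(sample_a, sample_b)
```

Logging the config alongside every result table or figure makes it clear how each number was produced.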

Final presentation (10 pts)

Problem & context (3 pts)

It should be clear exactly what problem you solved.

Score | Description
3 | Plain-English problem statement with sufficient context for the audience to follow.
2 | Mostly clear; minor jargon.
1 | Heavy jargon in describing the problem.
0 | Totally unclear what the talk is about or why it belongs here.

Method (3 pts)

Your method should be clearly explained.

Score | Description
3 | Key definitions/assumptions/steps clearly described; notation defined on first use.
2 | Understandable but skips a few definitions.
1 | Heavy jargon in describing the method; key steps left implicit.
0 | Method/evidence largely unexplained.

Takeaways of results (2 pts)

The “so-what” should be clear, as should the potential impact.

Score | Description
2 | Clear “so-what” and interpretation of the results.
1 | Numbers/theorems with little interpretation.
0 | No takeaways beyond raw results.

Delivery (2 pts)

Emphasize slide readability, pacing, and timing.

Score | Description
2 | Clean slides (large fonts, consistent notation); one idea per slide; confident delivery (not script-reading); on time (± a few seconds).
1 | Minor readability/pacing issues.
0 | Illegible figures/text; script-reading; unable to answer basic questions; major pacing issues.