<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Optimization | Learning And Signal Processing</title><link>https://ucl-lasp.github.io/tag/optimization/</link><atom:link href="https://ucl-lasp.github.io/tag/optimization/index.xml" rel="self" type="application/rss+xml"/><description>Optimization</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 25 Jan 2026 00:00:00 +0000</lastBuildDate><image><url>https://ucl-lasp.github.io/media/icon_hu488c70cfa50b07216f285734af4abcd1_22080_512x512_fill_lanczos_center_3.png</url><title>Optimization</title><link>https://ucl-lasp.github.io/tag/optimization/</link></image><item><title>Agents for Optimization</title><link>https://ucl-lasp.github.io/project/optimization-agents/</link><pubDate>Sun, 25 Jan 2026 00:00:00 +0000</pubDate><guid>https://ucl-lasp.github.io/project/optimization-agents/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Many industrial problems (routing, scheduling, circuit design) are NP-hard combinatorial optimization challenges. We investigate whether learning-based agents can &amp;ldquo;outsmart&amp;rdquo; or accelerate classical solvers.&lt;/p>
&lt;h2 id="active-projects">Active Projects&lt;/h2>
&lt;h3 id="1-neural-combinatorial-optimization">1. Neural Combinatorial Optimization&lt;/h3>
&lt;p>&lt;strong>Goal:&lt;/strong> Learning heuristics from data.
&lt;strong>Details:&lt;/strong> Instead of hand-crafting heuristics for every new problem, we train &lt;strong>RL agents&lt;/strong> to learn construction and improvement heuristics automatically. We focus on graph-based problems where the agent learns to traverse the graph to build a valid solution.&lt;/p>
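&lt;p>As a rough sketch of the construction-heuristic setting (not our trained model), the snippet below builds a TSP tour node by node with a softmax policy over edge scores; a fixed distance-based scorer stands in for the learned RL agent, and all names and parameters are illustrative.&lt;/p>

```python
import numpy as np

def construct_tour(dist, temperature, rng):
    """Build a tour node by node: at each step, sample the next
    unvisited node from a softmax over edge scores. A trained RL
    policy would replace the hand-coded scorer below."""
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        cur = tour[-1]
        cand = sorted(unvisited)
        # placeholder scorer: prefer nearby nodes (a learned policy
        # would produce these scores from graph features instead)
        scores = np.array([-dist[cur][c] * temperature for c in cand])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        nxt = int(rng.choice(cand, p=probs))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = np.random.default_rng(0)
dist = rng.random((6, 6))
dist = (dist + dist.T) / 2  # symmetric distance matrix
tour = construct_tour(dist, temperature=5.0, rng=rng)
```

&lt;p>The key property is that the policy always emits a &lt;em>valid&lt;/em> solution by construction, since only unvisited nodes are ever candidates.&lt;/p>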
&lt;h3 id="2-generalizable-solvers">2. Generalizable Solvers&lt;/h3>
&lt;p>&lt;strong>Goal:&lt;/strong> Agents that generalize across problem sizes.
&lt;strong>Details:&lt;/strong> A major limitation of neural solvers is poor generalization across instance sizes. We are designing architectures (based on GNNs and attention) that allow an agent trained on small graphs (e.g., 20 nodes) to generalize zero-shot to large-scale instances (e.g., 1,000 nodes) without retraining.&lt;/p>
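&lt;p>A minimal sketch of why attention-based architectures can transfer across sizes: the parameter shapes below depend only on the feature dimension, never on the number of nodes, so the same weights apply unchanged to a 20-node or a 1,000-node graph. This is an illustration of the design principle, not our actual architecture.&lt;/p>

```python
import numpy as np

def attention_scores(node_feats, Wq, Wk):
    """Self-attention over node embeddings. Wq and Wk are (d, d),
    so the parameter count is independent of graph size n."""
    q = node_feats @ Wq                          # (n, d)
    k = node_feats @ Wk                          # (n, d)
    logits = q @ k.T / np.sqrt(q.shape[1])       # (n, n)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    return attn / attn.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
d = 8
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
small = attention_scores(rng.normal(size=(20, d)), Wq, Wk)    # 20-node graph
large = attention_scores(rng.normal(size=(1000, d)), Wq, Wk)  # 1,000 nodes, same weights
```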
&lt;h2 id="works-done">Works Done&lt;/h2></description></item></channel></rss>