$\color{#373530}\rule{360px}{3px}$

⚖️ Exploring Optimal Taxation with Deep Reinforcement Learning

Reinforcement Learning, Economics Dec 2022

Extended the "AI Economist" framework by simulating complex environments with 16 agents and incorporating bounded rationality to test policy robustness. Evaluated alternative model-free, on-policy algorithms (A3C, vanilla policy gradient) and offline reinforcement learning (MARWIL) against the PPO baseline to assess their efficacy in dynamic economic modeling. [Link]

Developed using Python, RLlib, and the AI Economist Gym Environment
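In the AI Economist setup, the planner agent chooses a piecewise-linear marginal tax schedule for the worker agents. A minimal sketch of how such a schedule maps income to total tax (the bracket cutoffs and rates below are illustrative, not values from the project):

```python
def piecewise_tax(income, cutoffs, rates):
    """Total tax under a piecewise-linear marginal tax schedule.

    cutoffs: lower bound of each bracket (first must be 0).
    rates:   marginal rate applied to income inside that bracket.
    """
    tax = 0.0
    for i, (lo, rate) in enumerate(zip(cutoffs, rates)):
        hi = cutoffs[i + 1] if i + 1 < len(cutoffs) else float("inf")
        if income > lo:
            # Tax only the portion of income that falls inside this bracket.
            tax += rate * (min(income, hi) - lo)
    return tax

# Example: 10% / 20% / 30% brackets at 0, 10, 20
print(piecewise_tax(25, [0, 10, 20], [0.1, 0.2, 0.3]))  # → 4.5
```

The planner's action space in this framing is simply the vector of marginal rates, which is what the RL algorithms above are optimizing.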


$\color{#373530}\rule{360px}{2px}$

💻 Approximating Non-Convex Problems with Gurobi

Optimization Apr 2022 - May 2022

Computed a close-to-optimal solution to an NP-hard problem with a non-linear objective by strategically relaxing constraints of the original problem to obtain a linear objective and linear constraints. Achieved a #12 ranking out of over 230 submissions. [Link]

Developed using Python and Gurobi Optimizer
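A standard way to hand a non-linear objective term to an LP/MILP solver like Gurobi is to replace it with a piecewise-linear approximation over breakpoints. A plain-Python sketch of the idea (the quadratic term and breakpoints are illustrative; the actual objective relaxed in the project may differ):

```python
def pwl_approx(x, xs, ys):
    """Evaluate the piecewise-linear interpolant through (xs, ys) at x."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)  # position within the segment
            return y0 + t * (y1 - y0)
    raise ValueError("x outside breakpoint range")

xs = [0, 1, 2, 3]
ys = [v * v for v in xs]  # non-linear term f(x) = x^2 sampled at breakpoints

print(pwl_approx(1.5, xs, ys))  # → 2.5 (true value 2.25; gap shrinks with more breakpoints)
```

The solver then optimizes over the linear segments; tightening the breakpoint grid trades model size for approximation error, which is the kind of relaxation trade-off described above.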


$\color{#373530}\rule{360px}{2px}$

🍎 How Much Do You Need to See to Succeed?

Reinforcement Learning, Computer Vision May 2021

Optimized a PPO agent in the OpenAI Procgen FruitBot environment by reducing observation space and model complexity. Achieved baseline performance while removing 53% of visual input via horizontal slicing and saliency-based masking. Streamlined the IMPALA CNN architecture by removing a full convolutional sequence, proving that reduced-parameter models maintain strong generalization on unseen levels. [Link]

Developed using Gymnasium and PPO2
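The horizontal-slicing reduction can be sketched with NumPy array slicing: keeping the top 30 of 64 rows of a Procgen-style frame discards roughly 53% of the pixels before they reach the policy network (the exact rows kept here are illustrative):

```python
import numpy as np

obs = np.zeros((64, 64, 3), dtype=np.uint8)  # Procgen-style 64x64 RGB frame
kept = obs[:30]                              # horizontal slice: keep top 30 rows
removed_frac = 1 - kept.shape[0] / obs.shape[0]

print(f"{removed_frac:.1%} of rows removed")  # → 53.1% of rows removed
```

In the actual experiments, a saliency-based mask was applied on top of this to decide which regions to keep; the slice above shows only the geometric part of the reduction.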


$\color{#373530}\rule{360px}{2px}$