Research

Exploring the intersection of what interests me and what will impact the world most. I started as a self-learner in Math, Physics, Philosophy, and English, and am now exploring the next frontier of AI, HCI, and Compute.

Research Interests

Reinforcement Learning

Exploring how experiential learning algorithms can reshape decision-making systems across industries, from autonomous agents to policy optimization. Investigating the transformative potential of RL in creating adaptive systems that learn from interaction and drive meaningful global impact.

Game Theory

Advancing probabilistic reasoning and strategic decision-making frameworks for multi-agent environments. Developing mathematical models that capture uncertainty, cooperation, and competition to solve complex real-world coordination problems.

Human-Computer Interaction

Reimagining the symbiosis between humans and AI systems through intuitive interfaces and ethical design principles. Building technology that amplifies human capability while preserving agency, creativity, and meaningful connection in our digital future.

AI

Bridging the gap between large language models and neurosymbolic reasoning to unlock the next frontier of artificial intelligence. Researching architectures that combine statistical learning with symbolic reasoning for more robust, interpretable, and generalizable AI systems.

Publications

Ongoing and completed work.

2025

3 papers

A Spatially-Aware Search Engine for Textual Content in Images

Harvard SEAS
May 2025
Complete
Authors: Pranav Ramesh, Mohamed Zidan Cassim, Giovanni D'Antonio
Published in: Harvard School of Engineering and Applied Sciences
Abstract

Standard image search engines often treat text within images as secondary metadata or ignore its spatial location. This limits users' ability to find images based on text appearing in specific visual areas. We present a spatially-aware textual image search engine designed to address this limitation. Our approach utilizes an inverted index mapping text n-grams to their normalized bounding box coordinates within images. Queries consist of text and an optional spatial region. Relevance scoring combines spatial and textual factors using cosine similarity with n-gram length, weighted according to configurable parameters. To facilitate development and evaluation, we developed a pipeline for generating synthetic datasets with controlled text placement and ground truth.
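
A minimal sketch of the indexing-and-scoring idea described above, under illustrative assumptions: the helper names, the IoU-based spatial score, and the alpha weighting parameter are mine, not the paper's exact formulation.

    from collections import defaultdict

    # Inverted index: n-gram -> list of (image_id, normalized bbox (x0, y0, x1, y1))
    index = defaultdict(list)

    def add_image_text(image_id, ngram, bbox):
        """Register an n-gram occurrence with its normalized bounding box."""
        index[ngram].append((image_id, bbox))

    def box_overlap(a, b):
        """Intersection-over-union of two normalized boxes (x0, y0, x1, y1)."""
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def search(query_ngrams, region=None, alpha=0.5):
        """Score images by text match, optionally weighted by spatial overlap.

        alpha balances textual vs. spatial relevance (illustrative parameter).
        """
        scores = defaultdict(float)
        for ngram in query_ngrams:
            for image_id, bbox in index.get(ngram, []):
                text_score = len(ngram)                # longer n-grams weigh more
                spatial_score = box_overlap(bbox, region) if region else 1.0
                scores[image_id] += alpha * text_score + (1 - alpha) * text_score * spatial_score
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Example: index one occurrence and query the top-left quadrant for "sale".
    add_image_text("img_001", "sale", (0.05, 0.05, 0.25, 0.15))
    print(search(["sale"], region=(0.0, 0.0, 0.5, 0.5)))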

Keywords
Image Search, Text Localization, Spatial Search, N-grams, OCR, Information Retrieval, Computer Vision, Document Analysis
Course Project

Frontier AI Evaluation Framework

In Progress
June 2025
In Progress
Authors: Giovanni M. D'Antonio
Published in: In Progress
Abstract

Currently developing comprehensive evaluation methodologies for frontier AI systems, focusing on safety, alignment, and capability assessment across diverse domains.

Keywords
AI Safety, Frontier Models, Evaluation, Alignment, AI Assessment
Research Project

Efficient Compute Optimization for Large-Scale AI

In Progress
August 2025
In Progress
Authors: Giovanni M. D'Antonio
Published in: In Progress
Abstract

Currently developing novel approaches to computational efficiency in large-scale AI training and inference, exploring hardware-software co-design principles.

Keywords
Efficient Computing, AI Optimization, Hardware-Software Co-design, Large-Scale Training
Research Project

2024

5 papers

Studying Game Theory Optimal Solution through Reinforcement Learning

Harvard SEAS 2024
May 2024
Complete
Authors: Giovanni M. D'Antonio
Published in: Harvard University School of Engineering and Applied Sciences - CS 2400 (Neuroscience)
Abstract

In this project, we explore how independent Q-learning agents learn in preset environments. More precisely, we define increasingly complex game-theoretic systems to examine the shortcomings and interactions these agents exhibit while searching for optimal solutions. Analyzing behavior across games of growing complexity, we observe that the agents do find optimal solutions when those solutions are clear, deterministic, and reachable within a limited number of agent interactions. The independent Q-learning architecture proves incapable of handling games where the optimal choice changes constantly or where the number of agents is very high, but it is well suited to environments with limited agent interactions and static optimal strategies.
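
A minimal sketch of the setup described above, under illustrative assumptions (the payoff matrix, learning rate, and exploration schedule are mine): two independent Q-learners repeatedly play a 2x2 matrix game with a clearly dominant action, each updating its own Q-values while ignoring the other agent.

    import random

    # Illustrative 2x2 game in which action 1 strictly dominates for both agents.
    # payoff[a0][a1] = (reward to agent 0, reward to agent 1)
    payoff = [[(1, 1), (0, 3)],
              [(3, 0), (2, 2)]]

    alpha, epsilon, episodes = 0.1, 0.1, 5000
    Q = [[0.0, 0.0], [0.0, 0.0]]  # Q[agent][action], stateless bandit-style Q-learning

    def choose(agent):
        """Epsilon-greedy action selection for an independent learner."""
        if random.random() < epsilon:
            return random.randrange(2)
        return max(range(2), key=lambda a: Q[agent][a])

    for _ in range(episodes):
        a0, a1 = choose(0), choose(1)
        r0, r1 = payoff[a0][a1]
        # Each agent updates only its own Q-values, treating the other as part of the environment.
        Q[0][a0] += alpha * (r0 - Q[0][a0])
        Q[1][a1] += alpha * (r1 - Q[1][a1])

    print("Agent 0 Q-values:", Q[0])  # action 1 should dominate
    print("Agent 1 Q-values:", Q[1])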

Keywords
Game Theory, Reinforcement Learning, Q-Learning, Nash Equilibrium, Multi-Agent Systems, Decision Making, Optimal Strategies, Artificial Intelligence
Course Project

Incorporating Unspent Funds in Participatory Budgeting

Harvard SEAS 2024
December 2024
Complete
Authors: Eric Tang, Nicholas Lopez, Michael Y. Zhao, Giovanni M. D'Antonio
Published in: Harvard University School of Engineering and Applied Sciences
Abstract

We explore extensions to two commonly applied models of participatory budgeting, knapsack voting and the method of equal shares. In each model, we consider the possibility that voters can derive value from funds being allocated outside of a particular set of projects. Modifications are made to both the integral and partial knapsack models that allow voters to fully express their desired allocation of the budget, including the preference to save instead of spend. The extensions of both knapsack models preserve the equality and fairness properties of the standard counterparts, but increase the time and space complexity of methods to solve for optimal allocations. We further evaluate this method on real-world participatory budgeting data from Poland and the United States, finding that incorporating a preference for saving can lead to higher utility outcomes, especially when there is significant diversity in voter preferences.
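
A minimal sketch of the "unspent funds" idea in a partial-knapsack setting, under illustrative assumptions: the averaging aggregation rule, project names, and ballot values are mine, not the paper's exact mechanism.

    # Each voter splits the budget across projects plus an explicit "save" line
    # item, and the aggregate allocation averages the ballots, so a preference
    # for leaving funds unspent carries the same weight as any project.

    BUDGET = 100.0
    PROJECTS = ["park", "library", "roads", "save"]  # "save" = unspent funds

    ballots = [
        {"park": 40, "library": 30, "roads": 0,  "save": 30},
        {"park": 0,  "library": 50, "roads": 20, "save": 30},
        {"park": 60, "library": 0,  "roads": 40, "save": 0},
    ]

    def aggregate(ballots):
        """Average each line item across voters; the total stays within the
        budget because every individual ballot sums to the budget."""
        n = len(ballots)
        return {p: sum(b[p] for b in ballots) / n for p in PROJECTS}

    allocation = aggregate(ballots)
    assert abs(sum(allocation.values()) - BUDGET) < 1e-9
    print(allocation)  # roughly {'park': 33.3, 'library': 26.7, 'roads': 20.0, 'save': 20.0}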

Keywords
Participatory Budgeting, Knapsack Voting, Method of Equal Shares, Democratic Governance, Algorithmic Decision Making, Social Choice Theory, Budget Allocation
Research Paper

Hierarchical Distributed Low-Communication (HeDiLoCo) Training

Harvard SEAS 2024
December 2024
Complete
Authors: Romeo Dean, Giovanni M. D'Antonio
Published in: Harvard John A. Paulson School of Engineering and Applied Sciences
Abstract

Frontier AI models are growing to multiple trillions of parameters, pushing companies to invest billions into new datacenter construction. As AI companies strive to keep up with scaling trends, they are expected to push well past the capacity of any single datacenter. We present HeDiLoCo, a framework inspired by DiLoCo that extends asynchronous training with flexibility for hierarchical topologies. Workers in the same campus can synchronize frequently at low latency, while synchronization across distant regions occurs less frequently, efficiently balancing communication overhead with model convergence speed. Our experiments with a 100K parameter Transformer demonstrate that HeDiLoCo can speed up training by 100x relative to a synchronous baseline while incurring only a 1-2% final validation loss penalty.
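
A toy sketch of the hierarchical synchronization schedule described above, under illustrative assumptions: real DiLoCo-style methods apply an outer optimizer to pseudo-gradients, whereas this sketch simply averages parameters, and the sync intervals and scalar "model" are placeholders.

    import random

    CAMPUSES = 2                  # geographically distant regions
    WORKERS_PER_CAMPUS = 2
    H_INNER, H_OUTER = 10, 100    # steps between campus-level / region-level syncs
    STEPS = 300

    # Each worker holds a scalar "parameter"; local steps take noisy gradient steps.
    params = [[0.0 for _ in range(WORKERS_PER_CAMPUS)] for _ in range(CAMPUSES)]

    def local_step(w):
        return w - 0.01 * (w - 1.0 + random.gauss(0, 0.1))  # noisy pull toward 1.0

    for step in range(1, STEPS + 1):
        params = [[local_step(w) for w in campus] for campus in params]
        if step % H_INNER == 0:
            # Frequent, low-latency sync: average parameters within each campus.
            params = [[sum(c) / len(c)] * len(c) for c in params]
        if step % H_OUTER == 0:
            # Infrequent, high-latency sync: average parameters across all campuses.
            g = sum(sum(c) for c in params) / (CAMPUSES * WORKERS_PER_CAMPUS)
            params = [[g] * WORKERS_PER_CAMPUS for _ in range(CAMPUSES)]

    print(params)  # workers converge near 1.0 with far fewer cross-region syncs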

Keywords
Distributed Training, Machine Learning, HeDiLoCo, Multi-Datacenter, Asynchronous Training, AI Scaling
Research Report

SENNs and Adversarial Robustness

CS 2822r 2024
December 2024
Complete
Authors: Valerio Pepe, Giovanni D'Antonio, Martin Dimitrov
Published in: Harvard University - CS 2822r Final Project
Abstract

Self-Explaining Neural Networks (SENNs) promise interpretable, intrinsically explainable modeling by decomposing predictive processes into understandable concepts and relevance weights. In this paper, we present a systematic investigation into the adversarial robustness of SENNs under the Fast Gradient Sign Method (FGSM). We find that while SENNs can withstand minor, unstructured noise with minimal performance degradation, more targeted "chunked" perturbations severely compromise the model's accuracy. Our analyses reveal distinct vulnerability profiles: the aggregator exhibits non-monotonic accuracy patterns, while the conceptizer and parameterizer display more predictable yet ultimately devastating accuracy declines due to vulnerabilities to concept-based noise, which we term the viable alternative hypothesis.
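
A minimal FGSM sketch, under illustrative assumptions: it attacks a generic PyTorch classifier rather than the SENN architecture, and the toy linear model and epsilon value are placeholders.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon):
        """Return an adversarial copy of x using the Fast Gradient Sign Method:
        perturb the input in the direction of the sign of the loss gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Example with a toy linear classifier on random data.
    model = nn.Linear(784, 10)
    x = torch.rand(8, 784)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print((x_adv - x).abs().max())  # per-pixel perturbation is bounded by epsilon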

Keywords
Self-Explaining Neural Networks, Adversarial Robustness, FGSM, Machine Learning, Interpretability, Neural Networks
Course Project

Desserts in Deserts: Quantifying the Hidden Costs of Processed Foods

Citadel 2024
August 2024
Complete
Authors: Giovanni M. D'Antonio, Abhay Srivastava, Ethan C. Tan, Fucheng Warren Zhu
Published in: Citadel 2024 Summer Invitational
Abstract

Our research examines the relationships between processed foods, food insecurity, health outcomes, socioeconomic factors, and race across the United States. Through exploratory data analysis and iterative modeling, we uncovered significant associations between processed foods and food environments, racial demographics, geographic locations, and health indicators like obesity and diabetes rates. At the county level, our models revealed that food deserts are more prevalent in areas with higher concentrations of racial and ethnic minorities. The impact of processed foods extends far beyond immediate health concerns, perpetuating a cycle of deepening inequality with long-term consequences for vulnerable populations.

Keywords
Food Insecurity, Processed Foods, Health Outcomes, Socioeconomic Factors, Racial Demographics, Food Deserts, Obesity, Geographic Analysis, Statistical Modeling
Competition Report

2023

1 paper

Modern Facial Recognition

Harvard SEAS 2023
May 2023
Complete
Authors: Giovanni M. D'Antonio
Published in: Harvard University School of Engineering and Applied Sciences - Math 22b
Abstract

In this paper, we explore the connections between the material in Math 22b and the field of optimization through our main topic, gradient descent. We do so by connecting gradient descent to one of its most common applications, training Convolutional Neural Networks (CNNs) for image recognition, where it is considered the state of the art given its relative computational simplicity compared to other optimization methods. We investigate the roles that multivariable calculus and other tools from analysis play in optimizing functions through gradient descent, hoping to shed light on the mathematical foundations that support these powerful tools. Our main focus is guaranteed convergence to within some ε ∈ ℝ+ of a function's global optimum, along with a discussion of why such convergence cannot be guaranteed for arbitrary functions.
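
A minimal sketch of the convergence idea discussed above, under illustrative assumptions: a convex quadratic objective, a fixed step size, and a gradient-norm stopping rule standing in for "within ε of the global optimum"; none of this is the paper's exact analysis.

    def grad_descent(grad, x0, lr, epsilon, max_iters=10_000):
        """Run fixed-step gradient descent until the gradient magnitude drops
        below epsilon (a proxy for closeness to the optimum on convex functions)."""
        x = x0
        for _ in range(max_iters):
            g = grad(x)
            if abs(g) < epsilon:
                return x
            x -= lr * g
        return x

    # f(x) = (x - 3)^2 has its global minimum at x = 3; f'(x) = 2(x - 3).
    x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0, lr=0.1, epsilon=1e-6)
    print(x_star)  # approximately 3.0; on non-convex f this guarantee can fail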

Keywords
Facial Recognition, Convolutional Neural Networks, Gradient Descent, Optimization, Computer Vision, Multivariable Calculus, Machine Learning, Image Processing
Course Project