About Me
Hello! I'm a passionate robotics and high-performance computing graduate exploring the intersection of intelligent algorithms, real-time systems, and scalable design.
I'm currently finishing my Master of Applied Science at Queen's University. Along the way I've applied data analysis tools such as Excel, designed and tested robotics software with ROS and ROS2, and worked with data-center-scale systems.
With a background in electrical engineering and hands-on experience in Python, C, and C++, I aim to combine technical depth with creative problem-solving to develop high-impact solutions in robotics and high-performance computing.
Affiliations
Education & Experience
My Educational Journey and Work Experience
Donald A. Wilson Secondary School
2014 - 2018
Queen's University
2018 - 2023
Bachelor of Applied Science in Electrical Engineering, 2023
Ontario Power Generation
2021 - 2022
Professional Student Engineering Intern
Queen's University
2023 - Present
Master of Applied Science in Electrical Engineering, expected March 2026
Skills
My Preferred Technologies and Tools
Python
C
C++
ROS & ROS2
Inkscape
Camtasia
Research & Publications
Work that I've published or contributed to.
SHARP: Supercomputing for High-speed Avoidance and Reactive Planning in Robots
TBD
This paper presents SHARP (Supercomputing for High-speed Avoidance and Reactive Planning), a proof-of-concept study demonstrating how high-performance computing (HPC) can enable millisecond-scale responsiveness in robotic control. While modern robots face increasing demands for reactivity in human-robot shared workspaces, onboard processors are constrained by size, power, and cost. Offloading to HPC offers massive parallelism for trajectory planning, but its feasibility for real-time robotics remains uncertain due to network latency and jitter. We evaluate SHARP in a stress-test scenario where a 7-DOF manipulator must dodge high-speed foam projectiles. Using a hash-distributed multi-goal A* search implemented with MPI on both local and remote HPC clusters, the system achieves mean planning latencies of 22.9 ms (local) and 30.0 ms (remote, 300 km away), with avoidance success rates of 84% and 88%, respectively. These results show that when round-trip latency remains within the tens-of-milliseconds regime, HPC-side computation is no longer the bottleneck, enabling avoidance well below human reaction times. The SHARP results motivate hybrid control architectures: low-level reflexes remain onboard for safety, while bursty, high-throughput planning tasks are offloaded to HPC for scalability. By reporting per-stage timing and success rates, this study provides a reproducible template for assessing real-time feasibility of HPC-driven robotics. Collectively, SHARP reframes HPC offloading as a viable pathway toward dependable, reactive robots in dynamic environments.
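To give a concrete flavor of what "hash-distributed" means here, below is a rough sketch I've put together in Python with mpi4py (illustrative only, not the SHARP implementation): every search state is assigned to one MPI rank by hashing, each rank expands only the states it owns, and newly generated successors are routed to their owning ranks each round. The toy grid, goal set, and breadth-first expansion are simplifications; SHARP itself runs a multi-goal A* over manipulator trajectories.

# Run with, e.g.: mpiexec -n 4 python hash_distributed_search_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def owner(state):
    # Hash distribution: every search state is owned by exactly one rank.
    # (Tuples of ints hash deterministically, so all ranks agree on ownership.)
    return hash(state) % size

def successors(state):
    # Toy 4-connected grid stands in for the real configuration-space graph.
    x, y = state
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 50 and 0 <= y + dy < 50]

start = (0, 0)
goals = {(49, 49), (25, 40)}   # multi-goal: keep searching until the frontier is exhausted
visited = set()
frontier = [start] if owner(start) == rank else []
goals_found = set()

while True:
    # Expand only the states this rank owns; bucket successors by their owner.
    outgoing = [[] for _ in range(size)]
    for state in frontier:
        if state in visited:
            continue
        visited.add(state)
        if state in goals:
            goals_found.add(state)
        for nxt in successors(state):
            outgoing[owner(nxt)].append(nxt)

    # All-to-all exchange routes every successor to the rank that owns it.
    incoming = comm.alltoall(outgoing)
    frontier = [s for bucket in incoming for s in bucket if s not in visited]

    # Stop once every rank's frontier is empty.
    if comm.allreduce(len(frontier), op=MPI.SUM) == 0:
        break

all_found = comm.gather(goals_found, root=0)
if rank == 0:
    print("goals reached:", set().union(*all_found))

The point of the sketch is the ownership rule: because each state has exactly one owning rank, duplicate detection and the open list stay local, and the only communication is forwarding successor states to their owners.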
Scaling Parallel Graph Computing for Real-time Robots
TBD
Robots operating in human-robot shared environments often struggle to react within safe time bounds due to the limitations of onboard and consumer-grade computing. Offloading planning tasks to supercomputers can deliver millisecond-level responsiveness, as these systems excel at parallel graph-search algorithms, such as trajectory generation. As modern AI workloads increasingly shape data center design, understanding how robotic applications interact with large-scale compute infrastructures is becoming equally critical. This paper presents a systematic scaling analysis of a representative robotics planning algorithm, motivated by the emerging need for robots to leverage data center-scale compute resources. To study this interaction, we implement a hash-distributed multi-goal graph search that computes collision-avoidance trajectories. We evaluate its performance under stress, examining both strong and weak scaling across a range of graph sizes and process counts. Strong-scaling tests on graphs show a consistent performance peak at 64 cores, while weak-scaling experiments from 32 cores to 128 cores reveal that 32 cores provide the most reliable configuration for real-time responsiveness. The weak-scaling response demonstrates strong linearity, so we also propose a sample predictive equation that estimates the required core count for a given problem size and completion time. With 32-core configurations performing well for small graphs, we additionally simulate multi-robot servicing on a single node. These tests demonstrate that race conditions between CPUs cause severe slowdowns, making this setup impractical. Collectively, these findings suggest that future data centers designed to support real-time robotics workloads should allocate approximately one 32-core CPU per robot client for a small graph and two 32-core CPUs per robot client for a large graph, rather than sharing CPUs across multiple robots.
Will Be Released Soon.
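The paper's actual predictive equation and coefficients aren't reproduced here; the sketch below only illustrates the idea. If weak scaling is close to linear, per-core throughput is roughly constant, so the required core count grows with graph size divided by the planning time budget. The sample measurements, the functional form, and the fitted constant are hypothetical placeholders.

# core_count_model_sketch.py -- illustrative only; the numbers below are placeholders,
# not the measurements or coefficients reported in the paper.
import numpy as np

# Hypothetical weak-scaling samples: (graph vertices, cores used, planning time in ms).
samples = np.array([
    [1.0e5,  32, 18.0],
    [2.0e5,  64, 19.5],
    [4.0e5, 128, 21.0],
])
vertices, cores, times = samples[:, 0], samples[:, 1], samples[:, 2]

# Near-linear weak scaling implies roughly constant per-core throughput,
# so cores ~= k * vertices / time_budget. Fit k by least squares.
k = np.linalg.lstsq((vertices / times).reshape(-1, 1), cores, rcond=None)[0][0]

def cores_needed(num_vertices, time_budget_ms):
    # Estimate the core count required to plan over a graph of the given size
    # within the given time budget (purely illustrative).
    return int(np.ceil(k * num_vertices / time_budget_ms))

print(cores_needed(3.0e5, 20.0))  # e.g. cores needed for a 300k-vertex graph in 20 ms

A model like this is what makes the provisioning guidance actionable: given a robot's graph size and reaction-time budget, a data center operator can estimate how many cores to reserve per robot client rather than sharing CPUs across robots.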
Get in Touch
Feel free to reach out for collaborations or just to say hi.