I am a Ph.D. Candidate at AImageLab, University of Modena and Reggio Emilia, Italy.
My research focuses on Computer Vision and Deep Learning applied to Collaborative Robotics, in particular 3D Object Reconstruction and Human/Robot Pose Estimation. I work under the supervision of Prof. Roberto Vezzani.
Email  |  CV  |  Google Scholar  |  Github  |  LinkedIn
In the first part of my Ph.D. I tackled the task of 3D Object Reconstruction. More recently, I have been working on robotic tasks involving Pose Estimation and Grasping. Representative papers are highlighted.
Thanks to a novel 3D pose representation composed of two decoupled heatmaps, efficient deep networks from the 2D Human Pose Estimation (HPE) domain can be adapted to accurately compute 3D joint locations in world coordinates. Moreover, depth maps are used to bridge the gap between synthetic and real data.
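As a rough, hedged illustration of the decoupled-heatmap idea: the heatmap layout (one map over the image plane, one carrying depth), the soft-argmax readout, and the camera intrinsics below are my own illustrative assumptions, not the paper's code.

```python
# Hedged sketch: recovering a 3D joint location from two decoupled heatmaps.
# The heatmap layout and the intrinsics are illustrative assumptions.
import numpy as np

def soft_argmax_2d(heatmap):
    """Expectation over a softmax-normalized 2D heatmap (rows, cols)."""
    probs = np.exp(heatmap - heatmap.max())
    probs /= probs.sum()
    rows, cols = np.indices(heatmap.shape)
    return (probs * cols).sum(), (probs * rows).sum()  # (u, v) in pixels

def joint_3d_from_heatmaps(uv_map, uz_map, fx, fy, cx, cy, z_scale):
    """Combine a uv heatmap with a uz heatmap into a 3D point in camera space."""
    u, v = soft_argmax_2d(uv_map)       # image-plane location
    _, z_bin = soft_argmax_2d(uz_map)   # depth read from the second map
    z = z_bin * z_scale                 # bin index to metres (assumed scale)
    # Standard pinhole back-projection to camera coordinates
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with random maps and made-up intrinsics
uv = np.random.rand(64, 64)
uz = np.random.rand(64, 64)
print(joint_3d_from_heatmaps(uv, uz, fx=500.0, fy=500.0, cx=32.0, cy=32.0, z_scale=0.05))
```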
A multi-category mesh reconstruction framework infers textured object meshes, learning category-specific priors in an unsupervised manner and producing smooth shapes through a dynamic mesh subdivision approach.
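For intuition, a minimal sketch of one uniform midpoint-subdivision step on a triangle mesh, the basic operation behind mesh-refinement schemes of this kind; how and where the framework subdivides dynamically is not reproduced here.

```python
# Hedged sketch: split every triangle into four by inserting edge midpoints.
import numpy as np

def subdivide(vertices, faces):
    vertices = list(map(tuple, vertices))
    midpoint_cache, new_faces = {}, []

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = tuple((np.array(vertices[i]) + np.array(vertices[j])) / 2.0)
            vertices.append(m)
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices), np.array(new_faces)

# One tetrahedron: 4 vertices, 4 faces -> 10 vertices, 16 faces after one step
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
v2, f2 = subdivide(verts, faces)
print(v2.shape, f2.shape)  # (10, 3) (16, 3)
```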
A multi-task framework combines visual and keypoint localization features to improve car model classification accuracy.
A two-stage approach in which interpretable information is exploited by a novel view synthesis architecture to reproduce the future visual appearance of vehicles in an urban scene.
An unsupervised approach for training a Transformer-based architecture that learns to detect dynamic hand gestures in a continuous temporal sequence.
A Transformer-based architecture combined with a Finite State Machine (FSM) detects and classifies gestures. One of the proposals in the SHREC 2021 contest.
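A hedged sketch of how an FSM can turn per-frame classifier scores into discrete gesture detections; the states, thresholds, and majority-vote rule are illustrative assumptions, and the Transformer producing the scores is omitted.

```python
# Minimal FSM sketch: per-frame (label, confidence) pairs in, gestures out.
from collections import Counter

IDLE, ACTIVE = "idle", "active"

class GestureFSM:
    def __init__(self, on_thr=0.7, off_thr=0.4, min_frames=5):
        self.state = IDLE
        self.on_thr, self.off_thr, self.min_frames = on_thr, off_thr, min_frames
        self.buffer = []  # per-frame labels collected while a gesture is active

    def step(self, label, confidence):
        """Feed one frame's prediction; return a gesture label when one ends."""
        if self.state == IDLE and confidence >= self.on_thr:
            self.state, self.buffer = ACTIVE, [label]
        elif self.state == ACTIVE:
            if confidence >= self.off_thr:
                self.buffer.append(label)
            else:
                self.state = IDLE
                if len(self.buffer) >= self.min_frames:
                    return Counter(self.buffer).most_common(1)[0][0]  # majority vote
        return None

# Usage: fsm = GestureFSM(); for each frame: gesture = fsm.step(pred, conf)
```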
Given a large annotated pig dataset, long-term pig behavior analysis is possible, even though estimates from individual frames can be noisy.
A Transformer-based architecture that recognizes dynamic hand gestures by exploiting depth maps and surface normals from a single active depth sensor.
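As a hedged sketch, one common way to derive the surface-normal channel from a depth map is via image gradients and pinhole intrinsics; the intrinsic values and scaling below are illustrative assumptions, not the published preprocessing.

```python
# Hedged sketch: per-pixel surface normals estimated from a depth map.
import numpy as np

def normals_from_depth(depth, fx, fy):
    """Return an HxWx3 array of unit normals from an HxW depth map (metres)."""
    dz_dv, dz_du = np.gradient(depth)           # derivatives along rows, cols
    dx = dz_du * fx / np.maximum(depth, 1e-6)   # pixel gradients to metric slopes
    dy = dz_dv * fy / np.maximum(depth, 1e-6)
    normals = np.dstack((-dx, -dy, np.ones_like(depth)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

depth = np.fromfunction(lambda v, u: 1.0 + 0.001 * u, (240, 320))  # synthetic ramp
print(normals_from_depth(depth, fx=400.0, fy=400.0).shape)  # (240, 320, 3)
```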
A multimodal combination of CNNs that takes RGB, depth, and infrared images as input, achieving a good level of light invariance, a key requirement for vision-based in-car systems.
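A minimal sketch of multimodal late fusion, assuming one small CNN branch per modality (RGB, depth, infrared) with feature concatenation before the classifier; layer sizes and backbones are illustrative, not the published architecture.

```python
# Hedged sketch: three CNN branches, one per modality, fused before the classifier.
import torch
import torch.nn as nn

def branch(in_channels):
    """Tiny convolutional encoder for one modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultimodalNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.rgb, self.depth, self.ir = branch(3), branch(1), branch(1)
        self.classifier = nn.Linear(32 * 3, num_classes)

    def forward(self, rgb, depth, ir):
        feats = torch.cat([self.rgb(rgb), self.depth(depth), self.ir(ir)], dim=1)
        return self.classifier(feats)

model = MultimodalNet(num_classes=4)
out = model(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
print(out.shape)  # torch.Size([2, 4])
```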
IEEE International Conference on Pattern Recognition (ICPR)
IEEE Robotics and Automation Letters (RA-L)
Towards a Complete Analysis of People: From Face and Body to Clothes (T-CAP)
Advanced Course on Data Science and Machine Learning - ACDL 2021, Certosa di Pontignano (SI), Italy (certificate)