Hey there, I'm Michele Ciciolla, born and bred in Rome with a lifelong passion for technology. After finishing up high school, I jumped straight into studying Computer Science Engineering at Università Roma Tre. That's where I got my hands dirty with algorithms, data structures, and automation, all the good stuff. During my undergrad, I got hooked on mobile robotics navigation, and that interest led me to focus my bachelor's thesis on the topic. It was such a blast that I decided to keep the momentum going and signed up for a master's program in Artificial Intelligence and Robotics at Università La Sapienza. There, I got into some seriously cool stuff like robot kinematics, motion control, computer vision, and deep learning.
I'm all about diving into research and development, especially when it comes to software engineering for robotics, artificial intelligence, automotive tech, and even the space/defense sector. Let's make some waves in tech!
Leica Geosystems AG is a Swiss company of the Swedish Hexagon AB group. It produces measuring instruments for geodesy, building construction, and photogrammetry, and is based in Heerbrugg, near the Austrian border. My division is Machine Control, where I look after the MC1 software with a focus on paving functionalities. For this work I use C++ and Qt.
Pixies UrbanLab is a mobile robotics startup proposing a solution for outdoor cleaning. Our product is a wheeled mobile robot equipped with Artificial Intelligence to identify litter along its path. My role at Pixies UrbanLab is to maintain and develop new algorithms for the navigation and perception stacks using C++, ROS, and Python.
During this master's I studied and developed several projects on both robot kinematics and motion control, as well as deep learning and computer vision.
In this master's course, I believe these were the most relevant exams taken:
Python, MATLAB, and TensorFlow were the most used tools in this context.
During my bachelor's I approached computer science by learning about programming theory, telecommunications, and automatic control.
The fundamental exams taken during this period were:
My final thesis project focused on developing a navigation algorithm for a real mobile robot (TurtleBot) using Python and ROS.
This summer school offered lectures and hands-on tutorials to grasp the fundamental concepts of mobile robotics, ROS, and C++ programming. The tutorials and hands-on sessions took place at a dedicated education and training center for search and rescue.
I had the opportunity to program a (semi-)autonomous rough-terrain UGV, to test it extensively in a variety of environments, and finally to compete with my colleagues in a Search and Rescue competition.
This mini-master course offered by Experis Academy (ManpowerGroup) aimed to provide practical knowledge about collaborative robots (cobots) in the context of lean manufacturing. A general overview of Industry 4.0 and project management was also given, in order to train a well-rounded professional for the robotics transformation.
At the end, my colleagues and I worked on several case-study projects where I applied the knowledge and skills gained during the course. A video of the solution we provided for the Fives Group challenge using Universal Robots machines can be seen here on YouTube.
Download
brochure
In this program, I got hands-on Python experience and learned how to build and train neural networks using TensorFlow, improve network performance with transfer learning and parameter tuning, teach machines to understand, analyze, and respond to human speech with natural language processing systems, and process text, represent sentences as vectors, and train a model to create original phrases.
The first task I was asked to take care of was building a reliable simulation to test our algorithms on. Since we did not have a real robot to work with, this task was fundamental for the whole team. I started by following online tutorials on Gazebo and ROS best practices, and I managed to deliver a reliable simulation of our future robot.
View Project on Github
To deliver a prototype to the market as fast as possible, we could not write a SLAM algorithm from scratch, much less a planning controller. For that reason, my task was to scout open-source SLAM suites in the ROS ecosystem. I found several interesting candidates, but in the end RTAB-Map proved to be the most accurate. Using this algorithm we managed to build accurate 3D maps of the environment as well as good localization in medium-to-wide areas. The planner choice was quite straightforward, since TEB is considered one of the best planners available in ROS.
View Project on Github
I built from scratch a computer vision pipeline able to compute the position of an object in robot coordinates and plan a trajectory to pick it up with the brushes. The input of the pipeline is the bounding-box information provided by our custom YOLO network, which is used to compute the pixel coordinates of the object in the frontal camera image. Then comes point-cloud filtering and segmentation, which extracts the 3D location of the trash relative to the robot center. The final step is to command the planner to send the robot to that location.
View Project on Github
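As a rough sketch of the geometric core of this kind of pipeline, the bounding-box center can be back-projected to a 3D point in the camera frame with the pinhole model. The function name, intrinsics, and depth argument below are illustrative, not the original code:

```python
import numpy as np

def bbox_center_to_3d(bbox, depth, fx, fy, cx, cy):
    """Back-project the center of a bounding box to a 3D point in the
    camera frame using the pinhole model (hypothetical helper).

    bbox  -- (x_min, y_min, x_max, y_max) in pixels
    depth -- distance of the object along the optical axis, in meters
    fx, fy, cx, cy -- camera intrinsics (focal lengths, principal point)
    """
    u = (bbox[0] + bbox[2]) / 2.0   # pixel column of the bbox center
    v = (bbox[1] + bbox[3]) / 2.0   # pixel row of the bbox center
    x = (u - cx) * depth / fx       # lateral offset in meters
    y = (v - cy) * depth / fy       # vertical offset in meters
    return np.array([x, y, depth])
```

In the real pipeline the depth comes from the segmented point cloud, and the resulting point is then transformed from the camera frame into the robot frame before being handed to the planner.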
For safety reasons we had to build a controller checking that the planned trajectory is safe, and I was asked to think about a solution. Since I had already worked a bit with point clouds in the previous project, the solution came quite easily: the controller tries to segment a ground plane of points from the point cloud and raises an alarm whenever it cannot.
View Project on Github
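A minimal version of this check can be sketched with a RANSAC plane fit over the cloud: if no plane explains enough of the points, assume the ground is not visible and raise the alarm. The thresholds here are illustrative; the real controller ran on the live point cloud in ROS:

```python
import numpy as np

def ground_plane_alarm(points, dist_thresh=0.05, min_inlier_ratio=0.5,
                       iterations=100, rng=None):
    """Toy RANSAC ground-plane check (a sketch, not the production code).
    points -- (N, 3) array; returns True (alarm) when no sampled plane
    covers at least min_inlier_ratio of the cloud."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(points)
    best_inliers = 0
    for _ in range(iterations):
        sample = points[rng.choice(n, 3, replace=False)]
        # Plane through the three sampled points.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        best_inliers = max(best_inliers, int((dist < dist_thresh).sum()))
    return best_inliers / n < min_inlier_ratio
```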
ROS 1 is not ideal and sometimes fails to update and send topic data over the wire. That's why we had to detect when this happens: this topic and node manager checks every tracked topic in a loop and reports an alarm for each one that has not responded for a certain amount of time.
View Project on Github
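The core bookkeeping of such a watchdog can be sketched in plain Python (names are illustrative; in the real node, `on_message` would be a ROS subscriber callback and `stale_topics` would run in the monitoring loop):

```python
import time

class TopicWatchdog:
    """Minimal sketch of a stale-topic check."""

    def __init__(self, timeout):
        self.timeout = timeout     # seconds without data before alarm
        self.last_seen = {}        # topic name -> last message timestamp

    def track(self, topic, now=None):
        """Start tracking a topic, treating 'now' as its last activity."""
        self.last_seen[topic] = now if now is not None else time.time()

    def on_message(self, topic, now=None):
        """Record activity on a topic (a subscriber callback would call this)."""
        self.last_seen[topic] = now if now is not None else time.time()

    def stale_topics(self, now=None):
        """Return every tracked topic silent for longer than the timeout."""
        now = now if now is not None else time.time()
        return [t for t, ts in self.last_seen.items()
                if now - ts > self.timeout]
```

Passing explicit timestamps keeps the logic testable without waiting in real time; the production version would use ROS time instead.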
Quality inspection tasks require the camera to be calibrated. In the industrial robotics context, camera calibration is usually performed manually by a human operator, so the goal of this thesis was to develop an automatic way to perform the calibration using an ABB robot, a camera, and a machine vision software package (HexSight). The results obtained with this process demonstrated that automatic camera calibration can achieve sub-millimetric accuracy as well.
View Project on Github
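The heart of such a procedure can be sketched as a least-squares fit of a pixel-to-world map from point correspondences collected while the robot moves a target through the camera view. This is a simplified stand-in using a 2D affine model; the actual thesis used HexSight and the ABB robot's kinematics:

```python
import numpy as np

def fit_pixel_to_world(pixels, world):
    """Estimate a 2D affine map from pixel to world coordinates by
    least squares. pixels -- (N, 2) pixel coordinates; world -- (N, 2)
    corresponding world coordinates (e.g. from robot poses)."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])  # rows [u, v, 1]
    M, *_ = np.linalg.lstsq(A, world, rcond=None)       # 3x2 affine matrix
    return M

def apply_map(M, pixels):
    """Map pixel coordinates to world coordinates with the fitted affine."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])
    return A @ M
```

With noise-free correspondences this recovers the map essentially exactly; in practice, accuracy depends on how precisely the robot can place the target, which is why a robot-driven procedure can beat manual calibration.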
Here we simulate a mobile robot simultaneously mapping landmark positions and estimating its own trajectory while sensing the landmarks. The robot has a range sensor that measures the distance between itself and each landmark. In this project, correspondences are assumed to be known, so I focused on the optimization algorithm.
View Project on Github
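To give an idea of the optimization step, here is a Gauss-Newton solver for the simplest sub-problem: estimating a single landmark from range measurements taken at known robot poses. In the full project the trajectory is optimized jointly with the landmarks; this slice is only an illustration:

```python
import numpy as np

def trilaterate(poses, ranges, x0, iterations=20):
    """Gauss-Newton estimate of a landmark position from range-only
    measurements. poses -- (N, 2) known robot positions; ranges -- (N,)
    measured distances; x0 -- initial landmark guess."""
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        diff = x - poses                      # (N, 2) landmark - pose
        pred = np.linalg.norm(diff, axis=1)   # predicted ranges
        J = diff / pred[:, None]              # Jacobian of range wrt landmark
        r = ranges - pred                     # measurement residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)  # normal-equations step
        x += dx
    return x
```

The same predict/linearize/solve loop generalizes to the full graph, where the state vector stacks all poses and landmarks.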
The aim of this project was to implement a hybrid visual and force controller for the da Vinci robot in a simulated environment using MATLAB and V-REP/CoppeliaSim.
View Project on Github
Autonomous driving is the future of private and commercial transport, and the paper implemented in this project shows a potential distributed control scheme for several vehicles traveling one behind the other. The first vehicle is the leader of the platoon and chooses the velocity the group has to maintain. The vehicles behind follow it while keeping a given safety distance from each other. Tests were successful even in the presence of communication delays and sudden braking by the leader.
View Project on Github
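A toy discrete-time version of the leader-follower idea can illustrate the basic behavior: each follower accelerates based on its spacing error and relative velocity with respect to the vehicle ahead. The gains, spacing, and dynamics here are illustrative, not the paper's controller:

```python
import numpy as np

def simulate_platoon(n_followers=3, gap=5.0, v_leader=10.0,
                     kp=0.5, kv=0.8, dt=0.05, steps=2000):
    """Simulate a platoon of point-mass vehicles on a line.
    The leader drives at constant v_leader; each follower applies a
    proportional law on spacing error and relative velocity."""
    n = n_followers + 1
    pos = np.array([-gap * i for i in range(n)], dtype=float)  # initial spacing
    vel = np.zeros(n)
    vel[0] = v_leader                     # leader starts at cruise speed
    for _ in range(steps):
        acc = np.zeros(n)
        for i in range(1, n):
            spacing_err = (pos[i - 1] - pos[i]) - gap
            acc[i] = kp * spacing_err + kv * (vel[i - 1] - vel[i])
        vel += acc * dt                   # semi-implicit Euler step
        pos += vel * dt
    return pos, vel
```

With positive gains this second-order error dynamics is stable, so the followers converge to the leader's speed at the desired gap; the paper's contribution concerns making this robust to delays and harsh braking.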
A convolutional neural network was trained here to classify COVID-positive and COVID-negative patients from their chest X-ray images. For this project, my colleagues and I chose transfer learning to obtain better classification performance: we trained the last layers of a ResNet101 on a small labeled dataset found on the web, reaching 70% classification accuracy.
View Project on Github