Towards the end of my schooling I started working with robot arms and CNCs for some of my graduate classes. Then, once I started working in manufacturing, I was surrounded by industrial automation. Automation is clearly the future, but I have always been interested in where it is best applied. On the basis of time saved, cost, and health, cooking at home is a major candidate. Many people cook at home because it is healthier and cheaper than eating out. Focusing on cost, food is the 2nd most expensive item in the average American's monthly budget, around $800 a month; for me it's closer to $1,000.
For these reasons, I've been fascinated with the possibilities of automating cooking (not food production, at least not yet) in the home. I believe it could yield significant time savings for individuals and reduce the need to spend extra money eating out.
Another avenue would be an automated kitchen or restaurant that is open 24/7 and offers healthy, fresh, good-tasting options. This would be a healthy fast-food option for people who work night shifts or who like to meal prep.
All those ideas aside, for me the goal has been to save the 3-4 hours of cooking I have to do every week for dinner alone. I do not always enjoy repetitive tasks such as washing the dishes, so building this system in my own house would serve as a testing ground and solve some of my own problems while I am at it.
The quick math: 4 hours a week for 52 weeks is 208 hours a year. At $28 per hour (roughly the average hourly wage in the US), that is almost $6,000 that could be earned instead, or 208 hours freed up for anything else. And that does not account for the potential of the system to do the dishes and keep the kitchen clean.
For the time being I have found a more efficient routine for cooking and cleaning, so I have paused development of this idea. But I would love to return to it one day.
Automated beverage dispenser (seen working at the Automate 2024 conference)
Automated work station from Hyphen
My initial approach was to use a robot arm with a vision system. The arm and camera would have to map the environment and ideally assign a location to each dish, food item, and piece of cookware based on what is currently there, or assign positions based on what is in its inventory. Before taking on the inventory system, I wanted to just get familiar with programming again and be able to control the arm remotely.
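To make the inventory idea concrete, here is a purely illustrative sketch (the item names, fields, and numbers are all made up, not from this project): every known object gets a named slot in a lookup table, the vision system's only job is to keep the stored pose for each slot up to date, and motion planning reads from that table.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical pose of an item in the arm's base frame.
struct Pose {
    float x, y, z;   // position in millimeters
    float yaw;       // rotation about the vertical axis, in degrees
};

int main() {
    // The "inventory": everything the system knows about and where it last saw it.
    std::unordered_map<std::string, Pose> inventory = {
        {"cutting_board", {250.0f,   0.0f, 10.0f,  0.0f}},
        {"chef_knife",    {250.0f, 120.0f, 15.0f, 90.0f}},
        {"carrots",       {400.0f, -80.0f, 30.0f,  0.0f}},
    };

    // When the camera re-detects an item, its stored pose is simply overwritten.
    inventory["carrots"] = {380.0f, -60.0f, 30.0f, 0.0f};
    return 0;
}
```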
In doing this I used a 3D printed design from How to Mechatronics for the robot arm and used the same servo motors. For the electronic control side I wired up an Arduino Mega with the 6 servo motors and added an IR receiver to it. The remote I used was documented as using the NEC protocol, but I never got any of the libraries for reading IR remote protocols to work with it. I believe my receiver was not working right, because I tried multiple remotes, and even after adding a capacitor to provide a steady supply voltage it still did not work. I ended up using an oscilloscope to decode the signal and wrote my own script to determine which button was pressed. Once that was solved, getting the arm to move was easy; the motion is choppy, but that should be solvable.
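For a sense of what that homemade decoder looked like, here is a minimal Arduino-style sketch in the same spirit (an illustration, not my original script): it times the LOW pulses coming out of the IR receiver with pulseIn(), fingerprints each frame by its total mark time, and nudges one servo joint accordingly. The pin numbers and threshold values are placeholders; the real values came from the oscilloscope measurements.

```cpp
#include <Servo.h>

const int IR_PIN    = 2;   // output pin of the IR receiver module
const int SERVO_PIN = 9;   // one of the six joint servos
Servo joint;
int angle = 90;            // start the joint at mid-travel

// Capture the burst of mark times that follows a button press.
// Returns the number of pulses stored in the buffer.
int captureFrame(unsigned long* pulses, int maxPulses) {
  int count = 0;
  while (count < maxPulses) {
    // The receiver idles HIGH and pulls LOW during each mark;
    // a 15 ms timeout marks the end of the frame.
    unsigned long mark = pulseIn(IR_PIN, LOW, 15000);
    if (mark == 0) break;
    pulses[count++] = mark;
  }
  return count;
}

void setup() {
  pinMode(IR_PIN, INPUT);
  joint.attach(SERVO_PIN);
  joint.write(angle);
  Serial.begin(115200);
}

void loop() {
  unsigned long pulses[64];
  int n = captureFrame(pulses, 64);
  if (n == 0) return;

  // Crude fingerprint: sum of all mark times in the frame. Print it once per
  // button to find each button's range, then hard-code those ranges below.
  unsigned long total = 0;
  for (int i = 0; i < n; i++) total += pulses[i];
  Serial.println(total);

  if (total > 20000 && total < 25000) angle = min(angle + 5, 180);  // example "up" button
  if (total > 30000 && total < 35000) angle = max(angle - 5, 0);    // example "down" button
  joint.write(angle);
}
```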
Completed arm and controller
Remote
Basic schematic from How to Mechatronics
Actual Electronics
I have very little experience with camera vision systems, which makes it all the more fun to learn about. NVIDIA has a whole library of programming projects for the Jetson Nano. Many of them are free, and one of them is a class on teaching a program what to look for by giving it training data and some basic parameters. For instance, in my application I could give it 3 scenarios: 1, 2, or 3 carrots in an image. I then provide 20-40 images per scenario, both good and bad examples, and that is enough to train the neural network. Overall I got this system to work, but it is not reliable.
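For reference, once the network is retrained, running it on a frame takes only a few calls. Below is a minimal sketch modeled on the C++ example from NVIDIA's jetson-inference "Hello AI World" material; the model file, label file, image name, and blob names are placeholders for whatever the retraining step produces, not files from my project.

```cpp
#include <jetson-inference/imageNet.h>
#include <jetson-utils/loadImage.h>
#include <cstdio>

int main(int argc, char** argv)
{
    // Load the retrained classification network with TensorRT.
    // input_0/output_0 are the blob names used by the course's ONNX export.
    imageNet* net = imageNet::Create(NULL, "carrot_count.onnx", NULL,
                                     "labels.txt", "input_0", "output_0");
    if (!net)
        return 1;

    // Load a frame from disk into shared CPU/GPU memory (8-bit RGB).
    uchar3* img = NULL;
    int width = 0, height = 0;
    if (!loadImage("counter.jpg", &img, &width, &height))
        return 1;

    // Classify the frame; returns the best class index or -1 on error.
    float confidence = 0.0f;
    const int classIdx = net->Classify(img, width, height, &confidence);

    if (classIdx >= 0)
        printf("%s  (%.1f%% confidence)\n",
               net->GetClassDesc(classIdx), confidence * 100.0f);

    delete net;
    return 0;
}
```

For live use, the same Classify() call would be fed frames from the Jetson's camera instead of an image on disk.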
The major issue with this approach is that the result also depends on the background of the images. The lighting has to be just right, and the background needs to be similar to the training data. All of these conditions will vary in a real kitchen.
This was a great exercise to learn from, but for an autonomous system I need to find a more robust method of identifying the objects in the camera's view. For me, this is definitely the most challenging part of the idea.
Jetson Nano and camera
Thumbs up/down recognition from NVIDIA's course