But the first improvement here is wheel encoders. They will give you accurate distance regardless of hill climbing or battery power.
For distance measurements, you could use dead reckoning. For example, full power on all motors for 2 seconds equals 2 feet. Stall detection would be a good idea too.
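The encoder-based dead reckoning suggested above can be sketched as standard differential-drive odometry. The geometry constants below (ticks per revolution, wheel diameter, track width) are invented for illustration and would need to match your actual build:

```python
import math

# Hypothetical robot geometry - adjust for your build.
TICKS_PER_REV = 20          # encoder slots per wheel revolution
WHEEL_DIAMETER_M = 0.065    # 65 mm wheels
TRACK_WIDTH_M = 0.14        # distance between the two drive wheels

M_PER_TICK = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV

def update_pose(x, y, heading, left_ticks, right_ticks):
    """Differential-drive dead reckoning: advance the pose (x, y, heading)
    using the encoder ticks accumulated since the last update."""
    d_left = left_ticks * M_PER_TICK
    d_right = right_ticks * M_PER_TICK
    d_center = (d_left + d_right) / 2.0
    d_heading = (d_right - d_left) / TRACK_WIDTH_M  # radians
    # Integrate along the average heading over this step.
    mid = heading + d_heading / 2.0
    return (x + d_center * math.cos(mid),
            y + d_center * math.sin(mid),
            heading + d_heading)

# Driving straight: 20 ticks on both wheels is one wheel revolution,
# i.e. about 0.204 m of forward travel.
pose = update_pose(0.0, 0.0, 0.0, 20, 20)
```

Unlike the "2 seconds equals 2 feet" timing approach, this stays honest on hills and with a sagging battery, because it measures what the wheels actually did rather than what the motors were told to do.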
Object detection with wheel encoders seems like the place to start. At least you could map a room with good floors (no slippage). Ultrasound is only good for object detection or avoidance. The ultrasound may not be needed, but your mechanical/sensor design needs robust object detection. Like what if the bot gets stuck behind a 1" high wall? It could be just big bumpers front and back with touch sensors.

The software, though, will need as much information as you can extract from the environment, using as many sensors as you can mount - so it should be planned carefully. I am not trying to dissuade you from the project. Instead, I am merely positing ideas and information which you seem not to be aware of, or maybe haven't had time to think about yet. Plenty of information on all of this exists on the internet, though. Be sure to look up information applicable to game programming as well (robotics is one of those topics that encompasses nearly every level of computer and engineering science that exists - a great way to solve problems in robotics is to look at solutions from other industries and technologies working in the same areas). Good luck, and be sure to let us know what you come up with!
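The bumper-plus-touch-sensor idea above is simple to reason about in software. Here is a minimal sketch (hardware reads stubbed out; on a real bot the raw values would come from digital reads of the switches, and the class/function names are invented):

```python
from collections import deque

class DebouncedBumper:
    """Report a touch switch as pressed only after n consecutive closed
    reads, so contact chatter on impact doesn't trigger spurious maneuvers."""
    def __init__(self, n=3):
        self.history = deque(maxlen=n)

    def update(self, raw_closed):
        self.history.append(raw_closed)
        return len(self.history) == self.history.maxlen and all(self.history)

def avoidance_action(front_hit, rear_hit):
    """Choose a recovery maneuver from the two (debounced) bumpers."""
    if front_hit and rear_hit:
        return "stop"              # wedged front and back - stop and signal
    if front_hit:
        return "reverse_and_turn"
    if rear_hit:
        return "forward_and_turn"
    return "continue"

front = DebouncedBumper()
# One noisy read, then a solid press: only the third consecutive
# True in a row registers as a real hit.
readings = [True, False, True, True, True]
pressed = [front.update(r) for r in readings]
```

This is the kind of fallback that still catches the 1" wall the ultrasound beam sails right over.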
Indeed, a robot such as you're planning to build is going to be anything but simple. In fact, the hardware, electronics, etc. will be the simplest piece - it's the software that will be the challenging part.
Then there's path planning, dealing with moving objects (and knowing when those aren't a part of the map), and line segment removal and update (so when a chair or other object changes position, it can be intelligently routed around).
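To make the path-planning piece concrete, here is a minimal breadth-first search over a small occupancy grid. The grid and its "chair" are invented for illustration, and as the thread itself warns, a plain grid map struggles in a cluttered house once furniture moves - but the re-planning loop always has something like this at its core:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 4-connected occupancy grid
    (0 = free cell, 1 = occupied). Returns the shortest list of
    (row, col) cells from start to goal, or None if unreachable."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

room = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # a "chair" blocking the middle of the room
        [0, 0, 0, 0]]
path = plan_path(room, (0, 0), (2, 3))
# When the chair moves, flip the affected cells and simply re-plan.
```

The "chair moved" case from the paragraph above then reduces to updating the occupied cells and calling the planner again.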
You should implement encoders on the wheels/gearboxes/motors, so you can accurately track the motion of your robot. You are going to want to look into SLAM (Simultaneous Localization and Mapping) techniques for robots if you want this to seriously work. Grid-based navigation isn't going to cut it in the cluttered environment of a house (unless everything in that house is arranged on a perfect grid and never moves). Unfortunately, any form of real SLAM likely won't be implementable on an Arduino (not enough memory). Start looking into small form-factor PCs (Mini-ITX and smaller). The closest you might be able to get on an Arduino (and you would likely need a Mega, at least) would be to map the house as-you-go using an internal array of line segments. The biggest issue will be converting the information from the robot's interaction with the environment (distances travelled, ultrasonic sensors, IR sensors, touch sensors, etc.) into those line segments - not an easy task by far. You would also need a method to know when one line segment from one read is actually an extension of another line segment from a previous read (so you can combine the line segments to reduce memory consumption). Finally, you would need code to locate the robot "avatar", as represented in the program, within the internal map of line segments.

So the way to fix this is to use beam forming, using a phase difference between the sensors. See this other Hackster project for more info: Spread Spectrum Phased Array Sonar. Unfortunately this requires hacking into the internals of the sensor, and it heavily depends on the use of an oscilloscope, which I currently do not have. But hey! Technically, with 4 sensors we should be able to do a phased array sensor in two dimensions instead of the single dimension in the above article. Maybe you could try it out! Thinking how something might work is vastly different from seeing it work.
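For a feel of the phase-difference idea, here is a sketch of the classic delay-and-sum firing schedule for a linear array (the linked project uses a fancier spread-spectrum variant; the 1 cm element spacing here is an assumed value, not taken from any real build):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
SPACING_M = 0.01         # assumed 1 cm between transducer centres

def steering_delays(num_elements, steer_deg):
    """Per-element firing delays (seconds) for a linear phased array.
    Delaying successive elements by d*sin(theta)/c tilts the combined
    wavefront theta degrees off the array's broadside direction."""
    step = SPACING_M * math.sin(math.radians(steer_deg)) / SPEED_OF_SOUND
    delays = [i * step for i in range(num_elements)]
    # Shift so the earliest element fires at t = 0 (handles negative angles).
    earliest = min(delays)
    return [d - earliest for d in delays]

# 4 sensors steered 20 degrees off boresight: each element fires
# about 10 microseconds after its neighbour.
delays = steering_delays(4, 20)
```

Steering the other way just reverses the firing order, which is why `steering_delays(4, -20)` comes out as the same schedule mirrored.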
Imagining something working and actually seeing it working are two vastly different things. It was an incredible learning experience for me to attempt this practically. Thanks to Kirti, I finally understood it!
You see, the beam sent from the sensor gets quite wide as the distance increases, so even if the sensor is trying to read the distance to the corner of the wall, it is actually receiving the fastest echo from the wall closest to it! Hence in our plot above, we don't actually see the corners of the room. This is the reason why it simply appears like a circle.
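This corner-flattening effect is easy to reproduce numerically: model the sensor as reporting the nearest wall anywhere inside its beam cone, not the distance along its boresight. The room size and the 15-degree half-angle below are assumptions for illustration (real HC-SR04-class sensors vary):

```python
import math

ROOM_HALF = 2.0                       # square room, walls at x = +/-2 m, y = +/-2 m
BEAM_HALF_ANGLE = math.radians(15)    # assumed beam half-angle

def wall_distance(angle):
    """Distance from the room centre to the wall along a single ray."""
    return ROOM_HALF / max(abs(math.cos(angle)), abs(math.sin(angle)))

def sensor_reading(facing, steps=31):
    """The sensor reports the FIRST echo: the nearest wall anywhere
    inside its beam cone, not the distance along its facing direction."""
    angles = [facing - BEAM_HALF_ANGLE + 2 * BEAM_HALF_ANGLE * i / (steps - 1)
              for i in range(steps)]
    return min(wall_distance(a) for a in angles)

corner_true = wall_distance(math.radians(45))    # ~2.83 m to the actual corner
corner_read = sensor_reading(math.radians(45))   # ~2.31 m - corner gets cut off
```

Sweep `facing` through 360 degrees and the readings hover near the wall distance everywhere, with the corners shaved down - exactly the circle-like plot described above.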