
9 Challenges You Can Expect in Bin Picking

Many companies still rely on manual labor for repetitive tasks that require cognitive skills. To meet the demands of manufacturing, logistics and online retail, robots need to imitate human-like movements more closely: they should work smarter, faster and more flexibly.

On the surface, bin picking simply means detect, pick and place. In practice, however, small errors creep into each of these phases and add up to significant inaccuracy in handling the object. A 3D vision system reduces this error many times over, because it represents reality as it actually is.

Despite this high accuracy, the industry still struggles to select the right vision technology. So what are the challenges that need to be addressed?

 

1. Precision of 3D points

The precision of the vision system is an essential component of any bin picking application. Companies expect the robot's object recognition to enable accurate picks, so the "perception" of the automated system is critical. Higher precision means higher process reliability, because it lets the robot capture the objects to be picked with all their details. The precision of the 3D system enables the robot to assess exactly where, and in which orientation, an object to be removed is located.

 

2. Accuracy of the represented reality

Accuracy and precision are closely related, but they are not the same thing. Accuracy indicates whether the spatial coordinates calculated by the vision system match reality, i.e. how close a measurement is to the true position. Precision indicates how repeatably the objects to be grasped are represented in the point cloud, i.e. how tightly repeated measurements cluster. Roughly speaking, precision governs the "detecting" part and accuracy the "picking" part. Both must work together for the robot to grip handling objects at the right place and remove them safely.
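The distinction can be made concrete with a small numeric sketch. All values here are illustrative assumptions (a hypothetical fiducial point measured 50 times, with an invented systematic offset and noise level), not data from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a fiducial point whose true 3D position (in mm) is known
# from a calibration target, measured repeatedly by the vision system.
true_point = np.array([100.0, 50.0, 300.0])
systematic_offset = np.array([0.5, 0.0, -0.3])   # assumed calibration bias
measurements = true_point + systematic_offset + rng.normal(0, 0.2, size=(50, 3))

mean_measured = measurements.mean(axis=0)

# Accuracy: how far the *average* measurement lies from reality (bias).
accuracy_error = np.linalg.norm(mean_measured - true_point)

# Precision: how tightly repeated measurements cluster (random spread).
precision_spread = measurements.std(axis=0).mean()

print(f"accuracy error: {accuracy_error:.2f} mm")
print(f"precision (std): {precision_spread:.2f} mm")
```

A system can be precise but inaccurate (tight cluster, wrong place) or accurate but imprecise (right place on average, large scatter); bin picking needs both.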

 

3. Speed of grasping

Developers aim for robots that can compete with human throughput. With growing production volumes, warehouse logistics and order processing, the speed of a picking operation is one of the decisive factors: it determines whether an automated bin picking system is worth the investment. A vision system should support at least 600 picks per hour to compete with current solutions. High data quality offers no advantage if this pick rate is not achieved.
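To make the 600 picks/hour figure concrete: it leaves a budget of six seconds per pick, which imaging, detection, planning and robot motion must share. The phase durations below are assumptions chosen for illustration, not vendor benchmarks:

```python
# Cycle-time budget for a target of 600 picks/hour.
TARGET_PICKS_PER_HOUR = 600
cycle_budget_s = 3600 / TARGET_PICKS_PER_HOUR  # 6.0 s per pick

# Hypothetical breakdown of one pick cycle (assumed values):
phases_s = {
    "image acquisition": 0.5,
    "3D reconstruction + detection": 1.5,
    "motion planning": 0.5,
    "robot motion (pick + place)": 3.0,
}
total_s = sum(phases_s.values())
achieved_picks_per_hour = 3600 / total_s

print(f"budget per pick: {cycle_budget_s:.1f} s, planned cycle: {total_s:.1f} s")
print(f"achieved rate: {achieved_picks_per_hour:.0f} picks/hour")
```

The arithmetic shows why vision speed matters: with three seconds consumed by robot motion alone, the entire vision pipeline has to fit into the remaining half of the budget.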

 

4. Color dimensions

Today, many gripping systems do not use color for more accurate gripping, because most 3D vision systems simply do not provide this information. To obtain it, some vendors add a separate 2D color camera to their solution.

This often creates more problems than it solves, starting with calibration: a second camera with a different lens introduces additional error sources that must be calibrated out before the color data adds any value, and the same applies to the calibration with the robot. Color does add more truth to the recreated reality, which reduces uncertainty when selecting pick points. However, acquisition speed suffers from the additional data, and the extra camera also affects precision and accuracy.

When does it make sense for a company to use an additional color camera?

  • When the added errors in precision and accuracy can be kept as low as possible.
  • When the company accepts an increase in cycle time.

 

5. High Dynamic Range (HDR)

As companies move toward automated bin-picking systems, they need vision systems with a high dynamic range (HDR). With HDR, the gradations from dark to light are represented much better than in a Standard Dynamic Range (SDR) image, so very dark and very bright points are depicted realistically in the same scene. HDR therefore makes it possible to grab very bright and very dark objects from the same box. Without HDR this cannot be done with a single image, because the brightness range of such a scene exceeds what one exposure can capture.
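A minimal sketch of the idea behind HDR capture, assuming a linear camera response: several exposures of the same scene are fused so that each region is reconstructed from the exposure in which it is well exposed. The triangle weighting and the toy pixel values are illustrative assumptions, not any vendor's algorithm:

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Merge several 8-bit exposures of one scene into a radiance map.

    Assumes a linear response: pixel value ~ radiance * exposure time.
    Well-exposed pixels (away from 0 and 255) get higher weight; clipped
    pixels get weight zero.
    """
    images = [img.astype(np.float64) for img in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Triangle weight: 1.0 at mid-gray, 0.0 at the clipping limits.
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0
        num += w * (img / t)          # weighted radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)

# Toy scene: one dark region, one bright region, two exposures.
short_exp = np.array([[2.0, 200.0]])   # bright region well exposed
long_exp = np.array([[32.0, 255.0]])   # dark region well exposed, bright clipped
hdr = fuse_exposures([short_exp, long_exp], exposure_times=[1.0, 16.0])
```

In the toy example, the dark region's radiance is recovered from the long exposure and the bright region's from the short one, which is exactly why an HDR system can grab very dark and very bright objects from the same box.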

 

6. Reflective material

Metallic and shiny objects still pose a major challenge for picking. Their reflections cause errors in object localization: ghost points ("image noise") appear in mid-air, which the software perceives as objects and then needlessly avoids during motion planning. Such reflection artifacts can be suppressed with filters; a suitable filter solves the detection problem for metallic, shiny and reflective parts.
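One common way to suppress such ghost points is statistical outlier removal on the point cloud: a point whose neighbors are unusually far away is probably floating in the air rather than lying on a real surface. The sketch below is a generic illustration of that technique, not the filter of any particular vendor, and the synthetic cloud is an assumption:

```python
import numpy as np

def remove_ghost_points(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    far above the cloud-wide average; isolated mid-air ghost points fail
    this test while dense surface points pass it.
    """
    # Pairwise distances (brute force; fine for small clouds).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

# Synthetic scene: a dense planar patch plus one reflection artifact
# hovering 50 mm above it.
rng = np.random.default_rng(1)
plane = rng.uniform(0, 10, size=(200, 3))
plane[:, 2] = 0.0
ghost = np.array([[5.0, 5.0, 50.0]])
cloud = np.vstack([plane, ghost])
filtered = remove_ghost_points(cloud)
```

After filtering, the hovering artifact is gone and only the surface points remain, so the motion planner no longer has to route around a phantom obstacle.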

 

7. Form factor

A large baseline of the vision system not only causes occlusions and errors in the data, it also limits where the system can be used. When a robot is assembling parts, for example, a bulky sensor must be mounted so that it does not collide with the robot's motion, and special care must also be taken that the gripped object does not collide with its surroundings.

 

8. System stability

The expansion of robotics into all industries has introduced harsh operating conditions that degrade the quality of the vision system's 3D data over time. This is one of the biggest challenges for automated gripping tasks, because environmental conditions strongly influence vision systems. What is needed is a 3D system that adapts to external conditions and still delivers process-stable picking in the long term. Vision systems should therefore always be industrial-grade: protected against contact, ingress of foreign bodies and water, and resistant to a certain level of impact.

 

9. Stable lighting conditions at the point of use

Stable lighting conditions are necessary to obtain consistently constant data for an identical setting. The scene must also be fully illuminated to avoid shadows. Only then can precise and correct data about the handling objects be obtained.

 

These machine vision challenges are not insurmountable. They still limit the use of automated picking systems, but more and more companies are rising to them and finding effective ways to mimic human motion ever more closely.

Object handling for your gripping task

Ready to overcome these challenges? Find out which object handling is best suited for your automation task.


11. February 2021
