Robotic Vision: Technologies for Machine Learning and Vision Applications
From Grupo de Inteligencia Computacional (GIC)
Current revision as of 01:39, 20 March 2014
- On this page we make publicly available a demonstration of a watershed image segmentation method based on a hybrid gradient.
- The method has two parameters (a, b) that specify the mixture model controlling the activation of each component of the hybrid gradient. In addition, the method allows the specification of a decision threshold controlling the sensitivity of the region merging.
- For a quick test of the parameters (a, b, threshold), we share this lightweight application for Windows platforms (download).
- The sources: this class is written in C# and contains some methods for image segmentation based on the spherical approach.
- For any question, feel free to ask me.
- Platform support: this software has been developed on the .NET platform.
- Please refer to this paper: Ramon Moreno, Manuel Graña and Kurosh Madani, "A Robust Color Watershed Transformation and Image Segmentation Defined on RGB Spherical Coordinates", in Robotic Vision: Technologies for Machine Learning and Vision Applications, pp. 112-128, doi:10.4018/978-1-4666-2672-0.ch007, ISBN 978-1-4666-2734-5.
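As a rough illustration of the hybrid gradient described above (a sketch, not the published C# implementation), the chromatic component can be taken as the angular distance between neighboring RGB vectors viewed in spherical coordinates, and the intensity component as the difference of their magnitudes; the two are mixed with the weights (a, b). The function name and the 4-neighbor scheme are assumptions for the example:

```python
import numpy as np

def hybrid_gradient(rgb, a, b):
    """Weighted mixture of a chromatic and an intensity gradient.

    Illustrative sketch: the chromatic term is the angle between
    neighboring RGB direction vectors (the spherical-coordinate view
    of RGB, insensitive to shading), the intensity term is the
    difference of vector magnitudes.
    """
    rgb = rgb.astype(float)
    mag = np.linalg.norm(rgb, axis=-1)
    unit = rgb / np.maximum(mag, 1e-12)[..., None]   # direction on the RGB sphere
    # Angular difference with the right and down neighbors (chromatic edges).
    dot_x = np.clip((unit[:, 1:] * unit[:, :-1]).sum(-1), -1.0, 1.0)
    dot_y = np.clip((unit[1:, :] * unit[:-1, :]).sum(-1), -1.0, 1.0)
    chrom = np.zeros(rgb.shape[:2])
    chrom[:, :-1] += np.arccos(dot_x)
    chrom[:-1, :] += np.arccos(dot_y)
    # Magnitude difference with the same neighbors (intensity edges).
    inten = np.zeros(rgb.shape[:2])
    inten[:, :-1] += np.abs(mag[:, 1:] - mag[:, :-1])
    inten[:-1, :] += np.abs(mag[1:, :] - mag[:-1, :])
    return a * chrom + b * inten
```

Running a standard watershed transform on this gradient image, followed by merging basins whose boundary gradient falls below the decision threshold, would reproduce the overall pipeline the method describes.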