
5.3 Application of the Results

The methods presented in my dissertation can be applied as an intermediate step in several different tasks of pattern recognition, object detection, and high-level image understanding. The segmentation algorithm has already been applied successfully in two real-life scenarios.


The first is the detection of crosswalks [98], a key task within the Bionic Eyeglass Project [100], an initiative that aims at giving personal assistance to the visually impaired in everyday tasks. The heart of this system is a portable device that, by analyzing multimodal information (such as visual data, GPS coordinates, and environmental noises), is able to identify and recognize several objects and patterns of interest (e.g. clothes and their colors, pictograms, banknotes, or bus numbers) and situations (e.g. an incoming bus at the bus stop, or environments such as home/street/office). The information provided by this sensorial input can be injected into my algorithm, among others, in the form of merging rules. For example, if the system can localize the position of the user via GPS coordinates, the rule set of the segmenter can be extended dynamically to compose objects (such as lamp posts or signs) with particular color or shape properties. Detection robustness can be enhanced this way, as many false positives are filtered out. As of now, smartphones offer not only a remarkable arsenal of sensors, but state-of-the-art devices are also equipped with mobile GPUs that can be utilized by my parallel framework.
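
The sketch below illustrates this kind of dynamic rule injection: a GPS-derived scene type activates additional merging rules on top of a base rule set. It is only a minimal illustration; the names (MergingRule, rules_for_location), the cluster statistics, and the thresholds are assumptions made for the example and do not correspond to the actual interface of my framework.

```python
# Minimal sketch (assumed names and thresholds): merging rules as predicates over
# cluster statistics, with extra rules activated when GPS localizes the scene type.
from dataclasses import dataclass
from typing import Callable, Dict, List

Cluster = Dict[str, float]  # e.g. {"mean_L": ..., "mean_u": ..., "mean_v": ..., "elongation": ...}

@dataclass
class MergingRule:
    name: str
    predicate: Callable[[Cluster, Cluster], bool]  # True -> merge the two adjacent clusters

def color_distance(a: Cluster, b: Cluster) -> float:
    return sum((a[k] - b[k]) ** 2 for k in ("mean_L", "mean_u", "mean_v")) ** 0.5

# Rules that are always active.
BASE_RULES: List[MergingRule] = [
    MergingRule("similar_color", lambda a, b: color_distance(a, b) < 8.0),
]

def rules_for_location(scene_type: str) -> List[MergingRule]:
    """Extra rules injected when GPS places the user in a known scene type."""
    if scene_type == "street":
        # Compose elongated fragments of similar color into pole/sign candidates.
        return [MergingRule(
            "pole_like",
            lambda a, b: a["elongation"] > 4.0 and b["elongation"] > 4.0
                         and color_distance(a, b) < 15.0,
        )]
    return []

def should_merge(a: Cluster, b: Cluster, rules: List[MergingRule]) -> bool:
    return any(r.predicate(a, b) for r in rules)

# Usage: the rule set is extended once the scene type is known.
active_rules = BASE_RULES + rules_for_location("street")
a = {"mean_L": 40.0, "mean_u": 5.0, "mean_v": 5.0, "elongation": 5.2}
b = {"mean_L": 42.0, "mean_u": 6.0, "mean_v": 4.0, "elongation": 6.1}
print(should_merge(a, b, active_rules))
```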

In the second case, my system was used in an early prototype of the Digital Holographic Microscope Project [101], which aims at developing an environment (a hardware-software system) capable of autonomous water quality surveillance.

To ensure robust detection, segmentation, and classification of different foreground objects (such as algae, pollens, or dust) against the background, the final version of the environment will use several different data sources, including volumetric data (obtained via digital color holography) and material obtained with color and fluorescent microscopy.

My framework was successfully used to segment the input provided in the form of a video frame sequence obtained using color light microscopy and fluorescent microscopy [102]. To ensure fast computation, the planned back-end system will be powered by specially designed many-core processors, which my algorithm could again exploit.

Since the feature space my system works on is also a matter of selection, additional channels provided by various sensors are a possible way to further improve segmentation accuracy.
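
As a rough sketch of such a feature-space extension, the example below stacks a fluorescence intensity channel next to the color channels, so that clustering operates on a joint four-dimensional feature vector per pixel. The function name, the assumed Luv color representation, and the channel weight are illustrative choices, not values taken from my experiments.

```python
# Sketch under assumed names: widening the per-pixel feature space with an
# additional sensor channel (here a fluorescence intensity image).
import numpy as np

def build_feature_space(color_luv: np.ndarray,
                        fluorescence: np.ndarray,
                        w_fluo: float = 1.0) -> np.ndarray:
    """color_luv: (H, W, 3) array; fluorescence: (H, W) array of the same resolution."""
    h, w, _ = color_luv.shape
    fluo = w_fluo * fluorescence[..., None]        # weight and append the extra channel
    features = np.concatenate([color_luv, fluo], axis=-1)
    return features.reshape(h * w, 4)              # one 4-D feature vector per pixel

# Usage with stand-in data of matching size:
rng = np.random.default_rng(0)
luv = rng.random((64, 64, 3))
fluo = rng.random((64, 64))
X = build_feature_space(luv, fluo, w_fluo=0.5)     # ready for mode seeking / clustering
```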

References

[1] D. Comaniciu and P. Meer. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002. 2, 11, 18, 19, 21, 24, 26, 39, 53, 89, 92, 94, 95, 100

[2] R. C. Gonzalez and R. E. Woods. Digital Image Processing, Second Edition. Prentice Hall, 2002. 10

[3] A. M. Treisman and G. Gelade. A Feature-Integration Theory of Attention. Cognitive Psychology, 12(1):97–136, 1980. 10

[4] I. Biederman. Recognition-by-components: A Theory of Human Image Understanding. Psychological Review, 94(2):115, 1987. 10

[5] J. Mutch and D. G. Lowe. Object Class Recognition and Localization using Sparse Features with Limited Receptive Fields. International Journal of Computer Vision, 80(1):45–57, 2008. 10

[6] L. J. Li, R. Socher, and L. Fei-Fei. Towards Total Scene Understanding: Classification, Annotation and Segmentation in an Automatic Framework. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2036–2043. IEEE, 2009. 10, 79, 108

[7] A. K. Mishra, Y. Aloimonos, L. Cheong, and A. Kassim. Active Visual Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(2):639–653, 2012. 10, 11, 29

[8] E. Borenstein and J. Malik. Shape Guided Object Segmentation. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pages 969–976. IEEE Computer Society, 2006. 10

[9] P. Arbeláez, B. Hariharan, C. Gu, S. Gupta, L. Bourdev, and J. Malik. Semantic Segmentation using Regions and Parts. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012. 10

[10] R. Zhao and W. I. Grosky. Bridging the Semantic Gap in Image Retrieval. Distributed Multimedia Databases: Techniques and Applications, pages 14–36, 2001. 10

[11] X. M. He, R. S. Zemel, and D. Ray. Learning and Incorporating Top-Down Cues in Image Segmentation. In Proceedings of the 2006 European Conference on Computer Vision, pages I: 338–351, 2006. 10

[12] L. Dong and E. Izquierdo. A Biologically Inspired System for Classification of Natural Images. IEEE Transactions on Circuits and Systems for Video Technology, 17(5):590–603, May 2007. 10

[13] E. Borenstein and S. Ullman. Combined Top-down/Bottom-up Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 2109–2125, 2007. 10

[14] F. Shahbaz Khan, J. van de Weijer, and M. Vanrell. Top-down Color Attention for Object Recognition. In Proceedings of the 12th IEEE International Conference on Computer Vision and Pattern Recognition, pages 979–986. IEEE, 2009. 10

[15] T. Malisiewicz and A. A. Efros. Improving Spatial Support for Objects via Multiple Segmentations. In Proceedings of the 2007 British Machine Vision Conference. The British Machine Vision Association, 2007. 11

[16] P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. Contour Detection and Hierarchical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(5):898–916, 2011. 11, 13, 17, 29, 33, 37, 90, 92, 93, 94, 105, 109


[17] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. 11, 14

[18] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient Graph-Based Image Segmentation. International Journal of Computer Vision, 59(2):167–181, September 2004. 11, 13

[19] M. B. Salah, A. Mitiche, and I. B. Ayed. Multiregion Image Segmentation by Parametric Kernel Graph Cuts. IEEE Transactions on Image Processing, 20(2):545–557, 2011. 11, 14

[20] C. Nikou, N. P. Galatsanos, and A. C. Likas. A Class-adaptive Spatially Variant Mixture Model for Image Segmentation. IEEE Transactions on Image Processing, 16(4):1121–1130, 2007. 11, 16

[21] X. Ren and J. Malik. Learning a Classification Model for Segmentation. In Proceedings of the 9th IEEE International Conference on Computer Vision, pages 10–17. IEEE, 2003. 11, 15

[22] A. Hinneburg and D. A. Keim. Optimal Grid-Clustering: Towards Breaking the Curse of Dimensionality in High-Dimensional Clustering. In Proceedings of the 25th International Conference on Very Large Data Bases, page 517. Morgan Kaufmann Publishers Inc., 1999. 12

[23] S. X. Yu. Segmentation Induced by Scale Invariance. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pages 444–451. IEEE Computer Society, 2005. 13

[24] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5):530–549, 2004. 13, 54

[25] M. Maire, P. Arbelaez, C. Fowlkes, and J. Malik. Using Contours to Detect and Localize Junctions in Natural Images. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2008. 13

[26] B. C. Catanzaro, B. Su, N. Sundaram, Y. Lee, M. Murphy, and K. Keutzer. Efficient, High-quality Image Contour Detection. In Proceedings of the 12th IEEE International Conference on Computer Vision, pages 2381–2388. IEEE, 2009. 13

[27] J. Wassenberg, W. Middelmann, and P. Sanders. An Efficient Parallel Algorithm for Graph-based Image Segmentation. In Computer Analysis of Images and Patterns, pages 1003–1010. Springer, 2009. 13

[28] P. Strandmark and F. Kahl. Parallel and Distributed Graph Cuts by Dual Decomposition. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition, pages 2085–2092. IEEE, 2010. 14

[29] G. L. Miller and D. A. Tolliver. Graph Partitioning by Spectral Rounding: Applications in Image Segmentation and Clustering. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pages 1053–1060. IEEE, 2006. 15

[30] W. Y. Chen, Y. Song, H. Bai, C. J. Lin, and E. Y. Chang. Parallel Spectral Clustering in Distributed Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(3):568–586, 2011. 15

[31] A. P. Moore, S. Prince, J. Warrell, U. Mohammed, and G. Jones. Superpixel Lattices. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008. 15

[32] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi. TurboPixels: Fast Superpixels Using Geometric Flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12):2290–2297, December 2009. 16

[33] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC Superpixels. Technical report, EPFL Technical Report, 2010. 16

[34] C. Y. Ren and I. Reid. gSLIC: A Real-time Implementation of SLIC Superpixel Segmentation. Technical report, University of Oxford, Department of Engineering Science, 2011. 16


[35] C. Nikou, A. C. Likas, and N. P. Galatsanos. A Bayesian Framework for Image Segmentation with Spatially Varying Mixtures. IEEE Transactions on Image Processing, 19(9):2278–2289, 2010. 16

[36] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry. Unsupervised Segmentation of Natural Images Via Lossy Data Compression. Computer Vision and Image Understanding, 110(2):212–225, 2008. 16, 92, 93, 105, 109

[37] S. Paris and F. Durand. A Topological Approach to Hierarchical Segmentation using Mean Shift. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, 2007. 17, 25, 39, 43, 94

[38] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-means Clustering Algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):100–108, 1979. 18

[39] S. H. Roosta. Parallel Processing and Parallel Algorithms: Theory and Computation. Springer, 1999. ISBN-10: 0387987169, ISBN-13: 978-0387987163. 18

[40] Y. Cheng. Mean Shift, Mode Seeking, and Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(8):790–799, 1995. 18, 19, 23, 47

[41] K. Fukunaga and L. Hostetler. The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition. IEEE Transactions on Information Theory, 21(1):32–40, 1975. 18

[42] D. DeMenthon. Spatio-temporal Segmentation of Video by Hierarchical Mean Shift Analysis. In Language, 2, page 1. Center for Automation Research, University of Maryland, College Park, 2002. 23, 39

[43] C. Yang, R. Duraiswami, N. A. Gumerov, and L. Davis. Improved Fast Gauss Transform and Efficient Kernel Density Estimation. In Proceedings of the 9th IEEE International Conference on Computer Vision, pages 664–671, 2003. 23, 39

[44] C. Yang, R. Duraiswami, D. DeMenthon, and L. Davis. Mean-shift Analysis Using Quasi-Newton Methods. In Proceedings of the International Conference on Image Processing, pages 447–450, 2003. 23, 39

[45] B. Georgescu, I. Shimshoni, and P. Meer. Mean Shift Based Clustering in High Dimensions: A Texture Classification Example. In Proceedings of the 9th IEEE International Computer Vision Conference, pages 456–463, 2003. 23, 39

[46] M. A. Carreira-Perpiñán. Acceleration Strategies for Gaussian Mean-Shift Image Segmentation. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pages 1160–1167, 2006. 24, 39, 43

[47] M. A. Carreira-Perpiñán. Gaussian Mean-Shift is an EM Algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5):767–776, 2007. 24, 39

[48] M. A. Carreira-Perpiñán. Fast Nonparametric Clustering with Gaussian Blurring Mean-shift. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 153–160, New York, NY, USA, 2006. ACM. 24, 39

[49] Q. Luo and T. M. Khoshgoftaar. Unsupervised Multiscale Color Image Segmentation based on MDL Principle. IEEE Transactions on Image Processing, 15(9):2755–2761, 2006. 24, 39, 94

[50] D. Comaniciu. An Algorithm for Data-driven Bandwidth Selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):281–288, 2003. 24, 39

[51] P. Wang, D. Lee, A. G. Gray, and J. M. Rehg. Fast Mean Shift with Accurate and Stable Convergence. Journal of Machine Learning Research - Proceedings Track, 2:604–611, 2007. 24, 39

[52] J. Wang, B. Thiesson, Y. Xu, and M. Cohen. Image and Video Segmentation by Anisotropic Kernel Mean Shift. In T. Pajdla and J. Matas, editors, Proceedings of the 2004 European Conference on Computer Vision, 3022 of Lecture Notes in Computer Science, pages 238–249. Springer Berlin, Heidelberg, 2004. 25

[53] H. Guo, P. Guo, and H. Lu. A Fast Mean Shift Procedure with New Iteration Strategy and Re-sampling. In Proceedings of the 2006 IEEE International Conference on Systems, Man and Cybernetics, 3, pages 2385–2389, 2006. 25, 39, 43

[54] A. Pooransingh, C. A. Radix, and A. Kokaram. The Path Assigned Mean Shift Algorithm: A New Fast Mean Shift Implementation for Colour Image Segmentation. In Proceedings of the 15th IEEE International Conference on Image Processing, pages 597–600. IEEE, 2008. 25, 39, 71

[55] F. Zhou, Y. Zhao, and K. Ma. Parallel Mean Shift for Interactive Volume Segmentation. In F. Wang, P. Yan, K. Suzuki, and D. Shen, editors, Machine Learning in Medical Imaging, 6357 of Lecture Notes in Computer Science, pages 67–75. Springer, 2010. 25, 39, 43, 64

[56] C. Xiao and M. Liu. Efficient Mean-shift Clustering Using Gaussian KD-Tree. Computer Graphics Forum, 29(7):2065–2073, 2010. 26, 39, 64

[57] D. Freedman and P. Kisilev. Fast Mean Shift by Compact Density Representation. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 1818–1825, 2009. 26, 39

[58] D. Freedman and P. Kisilev. KDE Paring and a Faster Mean Shift Algorithm. SIAM Journal on Imaging Sciences, 3(4):878–903, 2010. 26, 39, 43

[59] K. Zhang and J. T. Kwok. Simplifying Mixture Models Through Function Approximation. IEEE Transactions on Neural Networks, 21(4):644–658, 2010. 26, 39, 64

[60] C. M. Christoudias, B. Georgescu, and P. Meer. Synergism in Low Level Vision. In Proceedings of the 16th IEEE International Conference on Pattern Recognition, 4, pages 150–155. IEEE, 2002. 26, 90, 93, 94, 95, 100, 109

[61] H. Zhang, J. E. Fritts, and S. A. Goldman. Image Segmentation Evaluation: A Survey of Unsupervised Methods. Computer Vision and Image Understanding, 110(2):260–280, 2008. 27, 34

[62] D. R. Martin, C. C. Fowlkes, D. Tal, and J. Malik. A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In Proceedings of the 8th IEEE International Conference on Computer Vision, 2, pages 416–423, July 2001. 28, 29, 33, 48, 105, 108, 109

[63] S. Bagon, O. Boiman, and M. Irani. What is a Good Image Segment? A Unified Approach to Segment Extraction. Proceedings of the 2008 European Conference on Computer Vision, pages 30–44, 2008. 29

[64] T. Kadir and M. Brady. Saliency, Scale and Image Description. International Journal of Computer Vision, 45(2):83–105, 2001. 29, 30, 31, 72, 75

[65] S. Khan and M. Shah. Object Based Segmentation of Video Using Color, Motion and Spatial Information. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, pages II–746. IEEE, 2001. 29

[66] S. B. Wesolkowski. Color Image Edge Detection and Segmentation: A Comparison of the Vector Angle and the Euclidean Distance Color Similarity Measures. PhD thesis, University of Waterloo, 1999. 29, 81

[67] X. Jiang, C. Marti, C. Irniger, and H. Bunke. Distance Measures for Image Segmentation Evaluation. EURASIP Journal on Applied Signal Processing, 2006:209–209, 2006. 29

[68] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward Objective Evaluation of Image Segmentation Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):929–944, 2007. 29, 105, 109

[69] S. Alpert, M. Galun, A. Brandt, and R. Basri. Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34:315–327, February 2012. 29, 33, 38, 95, 105, 109


[70] K. Yanai and K. Barnard. Image Region Entropy: A Measure of Visualness of Web Images Associated with One Concept. In Proceedings of the 13th Annual ACM International Conference on Multimedia, pages 419–422. ACM, 2005. 31

[71] B. Moghaddam, H. Biermann, and D. Margaritis. Defining Image Content with Multiple Regions-of-interest. In Proceedings of the 1999 IEEE Workshop on Content-Based Access of Image and Video Libraries, pages 89–93. IEEE, 1999. 32

[72] N. Abbadeni, D. Zhou, and S. Wang. Computational Measures Corresponding to Perceptual Textural Features. In Proceedings of the 2000 International Conference on Image Processing, 3, pages 897–900. IEEE, 2000. 32

[73] S. Alpert, M. Galun, R. Basri, and A. Brandt. Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, June 2007. 33

[74] H. El-Rewini and M. Abd-El-Barr. Advanced Computer Architecture and Parallel Processing (Wiley Series on Parallel and Distributed Computing). Wiley, 2005. ISBN-10: 0471467405, ISBN-13: 978-0471467403. 42

[75] C. Jia, W. Xiaojun, and C. Rong. Parallel Processing for Accelerated Mean Shift Algorithm. Journal of Computer Aided Design and Computer Graphics, (3):461–466, 2010. 43, 64

[76] C. Tomasi and R. Manduchi. Bilateral Filtering for Gray and Color Images. In Proceedings of the 6th International Conference on Computer Vision, pages 839–846. IEEE, 1998. 70, 75

[77] K. Bitsakos, C. Fermüller, and Y. Aloimonos. An Experimental Study of Color-based Segmentation Algorithms Based on the Mean-shift Concept. Proceedings of the 2010 European Conference on Computer Vision, pages 506–519, 2010. 70, 89

[78] C. R. Maurer Jr, R. Qi, and V. Raghavan. A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):265–270, 2003. 76

[79] A. Hoover, G. Jean-Baptiste, X. Jiang, P. J. Flynn, H. Bunke, D. B. Goldgof, K. Bowyer, D. W. Eggert, A. Fitzgibbon, and R. B. Fisher. An Experimental Comparison of Range Image Segmentation Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(7):673–689, 1996. 79, 108

[80] A. Duarte, A. Sánchez, F. Fernández, and A. S. Montemayor. Improving Image Segmentation Quality Through Effective Region Merging using a Hierarchical Social Metaheuristic. Pattern Recognition Letters, 27(11):1239–1251, 2006. 79, 108

[81] S. Beucher. Watershed, Hierarchical Segmentation and Waterfall Algorithm. Mathematical Morphology and its Applications to Image Processing, pages 69–76, 1994. 79, 108

[82] F. Y. Shih and S. Cheng. Automatic Seeded Region Growing for Color Image Segmentation. Image and Vision Computing, 23(10):877–886, 2005. 79, 108

[83] R. D. Dony and S. Wesolkowski. Edge Detection on Color Images Using RGB Vector Angles. In Proceedings of the 1999 IEEE Canadian Conference on Electrical and Computer Engineering, 2, pages 687–692. IEEE, 1999. 81

[84] T. Carron and P. Lambert. Color Edge Detector Using Jointly Hue, Saturation and Intensity. In Proceedings of the 1994 IEEE International Conference on Image Processing, 3, pages 977–981. IEEE, 1994. 82, 83

[85] T. H. Kim, K. M. Lee, and S. U. Lee. Learning Full Pairwise Affinities for Spectral Segmentation. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition, pages 2101–2108. IEEE, 2010. 92, 93


[86] B. Varga and K. Karacs. High-resolution Image Segmentation Using Fully Parallel Mean Shift. EURASIP Journal on Advances in Signal Processing, 2011(1):111, 2011. 94, 107

[87] J. S. Kim and K. S. Hong. A new Graph Cut-based Multiple Active Contour Algorithm without Initial Contours and Seed Points. Machine Vision and Applications,19(3):181–193, 2008. 94

[88] A. Prati, I. Mikic, M. M. Trivedi, and R. Cucchiara. Detecting Moving Shadows: Algorithms and Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(7), 2003. 97

[89] H. D. Cheng, X. H. Jiang, Y. Sun, and J. Wang. Color Image Segmentation: Advances and Prospects. Pattern Recognition, 34(12):2259–2281, 2001. 97

[90] M. Everingham, L. van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. PASCAL 2008 Results. 2008. 105, 109

[91] N. Chinchor and B. Sundheim. MUC-5 Evaluation Metrics. In Proceedings of the 5th Conference on Message Understanding, pages 69–78. Association for Computational Linguistics, 1993. 105, 109

[92] B. Bartell, G. Cottrell, and R. Belew. Optimizing Parameters in a Ranked Retrieval System Using Multi-query Relevance Feedback. In Proceedings of the 1994 Symposium on Document Analysis and Information Retrieval (SDAIR). Citeseer, 1994. 105, 109

[93] The MathWorks Inc. MATLAB. Accessed June, 2012. 106

[94] AccelerEyes LLC. Jacket. Accessed June, 2012. 106

[95] NVIDIA Corporation. CUDA. Accessed August, 2012. 106

[96] The Microsoft Corporation. Microsoft Excel 2010. Accessed June, 2012. 106

[97] B. Varga and K. Karacs. GPGPU Accelerated Scene Segmentation Using Nonparametric Clustering. In Proceedings of the 2010 International Symposium on Nonlinear Theory and its Applications, pages 149–152, 2010. 107

[98] M. Radványi, B. Varga, and K. Karacs. Advanced Crosswalk Detection for the Bionic Eyeglass. In Proceedings of the 12th International Workshop on Cellular Nanoscale Networks and Their Applications, pages 1–5. IEEE, 2010. 107, 111

[99] B. Varga and K. Karacs. Towards a Balanced Trade-off Between Speed and Accuracy in Unsupervised Data-driven Image Segmentation. Machine Vision and Applications, 2012. Under review. 108

[100] K. Karacs, A. Lazar, R. Wagner, D. Bálya, T. Roska, and M. Szuhaj. Bionic Eyeglass: An Audio Guide for Visually Impaired. In Proceedings of the 2006 IEEE Biomedical Circuits and Systems Conference, pages 190–193. IEEE, 2006. 111

[101] S. Tőkes, V. Szabó, L. Orzó, P. Divós, and Z. Krivosija. Digital Holographic Microscopy and CNN Based Image Processing for Biohazard Detection. In Proceedings of the 11th International Workshop on Cellular Neural Networks and Their Applications, pages 8–8. IEEE, 2008. 111

[102] B. Varga. Color-based Object Segmentation for Water Quality Surveillance. Thesis for Informatics Specialist in Bionic Computing, Pázmány Péter Catholic University, 2011. 111

Appendix A

Additional High Resolution Evaluation Examples for the Parallel System

This chapter contains two additional examples of the output of the parallel segmentation framework evaluated in Subsection 3.4.2. Figures A.1 and A.2 show the results of the segmentation and merging phases, using the two corresponding system settings, on high resolution input images.

Figure A.1: A second high resolution segmentation example ("Shore") from the 15 image corpus used for the evaluation of the parallel framework. Columns correspond to the speed and quality settings; the rows show the input image, the cluster map after mode seeking (N_K = 730 and N_K = 690), the cluster map after merging (N_K = 77 and N_K = 55), the output before merging, and the merged output.

Figure A.2: A third high resolution segmentation example ("Shower") from the 15 image corpus used for the evaluation of the parallel framework. Columns correspond to the speed and quality settings; the rows show the input image, the cluster map after mode seeking (N_K = 710 and N_K = 670), the cluster map after merging (N_K = 200 and N_K = 141), the output before merging, and the merged output.

Appendix B

Additional BSDS Evaluation Examples for the Adaptive System

This chapter contains three additional examples of the output of the adaptive segmentation framework evaluated in Subsection 4.5.1.

Figure B.1: Additional segmentation examples from the test set of the BSDS500. The first row contains the input images; rows 2 to 4 show the results obtained at the end of certain stages of the procedure: the segmented images (m = 228, 333, and 231), the result of merging using d_AE (m = 26, 96, and 11), and the output using d_AE and d_N (m = 23 with F = 0.66, m = 95 with F = 0.62, and m = 9 with F = 0.32). The number of clusters is denoted by m, the F-measure of the segmentation output by F. Row 5 shows the boundaries of multiple ground truth segmentations provided as reference, with different colors. Segmentation was done using the OPTIMAL parametrization.

Appendix C

Additional High Resolution Evaluation Examples for the Adaptive System

This chapter contains twenty additional examples of the output of the adaptive segmentation framework evaluated in Subsection 4.5.2.

Figure C.1: Additional segmentation examples from the high resolution image set (columns: κ = 1.714, 2.143, 2.143, and 2.214). Input images are found in row 1; rows 2 and 3 show the output of the segmentation phase using the HD-OPTIMAL parametrization and the merged images, respectively (t = 13.90 s, 15.15 s, 11.65 s, and 10.42 s). Row 4 contains the segmentation results of the reference system using the high setting (t = 135.37 s, 335.24 s, 101.88 s, and 108.68 s). For the sake of visibility, cluster boundaries are marked with black or white (depending on which is more …

Figure C.2: Additional segmentation examples from the high resolution image set (columns: κ = 2.143, 2.500, 2.571, and 2.786). Input images are found in row 1; rows 2 and 3 show the output of the segmentation phase using the HD-OPTIMAL parametrization and the merged images, respectively (t = 8.27 s, 24.91 s, 9.60 s, and 12.08 s). Row 4 contains the segmentation results of the reference system using the high setting (t = 153.25 s, 337.58 s, 388.47 s, and 132.56 s). For the sake of visibility, cluster boundaries are marked with black or white (depending on which is more …

Figure C.3: Additional segmentation examples from the high resolution image set (columns: κ = 2.929, 3.000, 3.000, and 3.214). Input images are found in row 1; rows 2 and 3 show the output of the segmentation phase using the HD-OPTIMAL parametrization and the merged images, respectively (t = 11.74 s, 22.27 s, 12.88 s, and 18.00 s). Row 4 contains the segmentation results of the reference system using the high setting (t = 104.50 s, 155.54 s, 694.40 s, and 264.10 s). For the sake of visibility, cluster boundaries are marked with black or white (depending on which is more …
