
CSI5341 Computer Vision Assignment


1 Texture Image Comparison

This assignment explores image processing and the multilayer perceptron with texture data. The idea here is that we don't see all texture classes during training. Instead, we want to find out whether two texture images show the same texture or not. We will be using the Kylberg Texture Dataset v. 1.0 by Dr. Gustaf Kylberg [1] at the Centre for Image Analysis, Uppsala University, Sweden. The original database contains 28 texture classes and is available at https://www.cb.uu.se/~gustaf/texture/. However, for this assignment, we are only using a small subset of 6 classes with 40 images each. We will be using two subsets of images: 180 images for training and validation and 60 images for testing. Please note that these image sets are also available on BrightSpace as an attachment to this assignment. You are not allowed to use the test set for anything other than your final assessment of the approaches in Section 1.5.

1.1 Getting Started

You will need to download the two subsets from BrightSpace or from Uppsala University. Unpack the images in a directory called textures relative to your Jupyter notebook. We will be marking your notebook with the data installed in textures/training and textures/testing, and your notebook will have to work with the images at these locations in the corresponding six subdirectories named for the texture (canvas1, cushion1, linsseeds1, sand1, seat2 and stone1). The training and validation data are the images numbered 001 to 030, while the testing images have the numbers 031 to 040. Do not rename images or directories, and do not reorganize the data. You will lose marks if your notebook does not work with images at the expected locations.

1.2 Image Preprocessing [1.0]

You need to write a python function that loads images and preprocesses them as described below. The images are of size 576 × 576. Build a 3-level image pyramid for each image, downsampling by a factor of 2 between levels. Apply histogram equalization and Gaussian smoothing before the downsampling. The final image pyramid sizes should be [576 × 576, 288 × 288, 144 × 144]. Visualize one example in the notebook.
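The pyramid construction could be sketched as below, a minimal version using skimage. A random array stands in for a real texture image (real images would be loaded with skimage.io.imread from textures/training/...); the exact sigma and function choices are assumptions, not the required solution.

```python
import numpy as np
from skimage import exposure, filters, transform

def build_pyramid(img, levels=3, sigma=1.0):
    """3-level pyramid: equalize, then smooth and halve twice."""
    img = exposure.equalize_hist(img)               # histogram equalization
    pyramid = [img]
    for _ in range(levels - 1):
        smoothed = filters.gaussian(pyramid[-1], sigma=sigma)
        # anti_aliasing=False since Gaussian smoothing was applied above
        half = transform.rescale(smoothed, 0.5, anti_aliasing=False)
        pyramid.append(half)
    return pyramid

rng = np.random.default_rng(0)                      # synthetic 576x576 stand-in
pyr = build_pyramid(rng.random((576, 576)))
print([p.shape for p in pyr])   # [(576, 576), (288, 288), (144, 144)]
```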

Apply the Sobel edge filter on each level of the image pyramid, then summarize each filtered texture image by a histogram with a fixed number of 256 bins. The final feature shape, i.e., the complete histogram over all levels for each image, should be [3 × 256]. Use this feature space for the following questions. Keep the normalization in mind as you answer the questions.
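A possible feature extractor, again as a hedged sketch: normalizing each histogram to sum to 1 is one way to handle the differing pixel counts per level, which is the normalization issue the text alludes to. The toy pyramid of random levels is only for demonstration.

```python
import numpy as np
from skimage import filters

def pyramid_features(pyramid, bins=256):
    """Sobel-filter each level, then summarize it as a normalized
    256-bin histogram of the edge magnitudes."""
    feats = []
    for level in pyramid:
        edges = filters.sobel(level)
        hist, _ = np.histogram(edges, bins=bins,
                               range=(0.0, edges.max() + 1e-12))
        feats.append(hist / hist.sum())   # normalize: levels differ in size
    return np.stack(feats)                # shape (3, 256)

rng = np.random.default_rng(0)
toy_pyramid = [rng.random((s, s)) for s in (576, 288, 144)]
feat = pyramid_features(toy_pyramid)
print(feat.shape)   # (3, 256)
```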

1.3 Learning-Free Classification

1.3.1 Single Level [2.0]

Use the histogram for the first pyramid level (with shape [1 × 256]) for this question. Generate an overall histogram for each texture category by fusing the histograms from all the textures of the same category in the training set. This can be done by averaging the histograms in each category. Visualize the six category histograms in one chart to observe the differences between the distributions. Ideally, you can get a chart similar to Fig 1 (only an example, not for the exact data), which shows that the histograms for most of the classes are well separated.
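The fusion step might look like the following sketch, where random Dirichlet samples stand in for the real level-0 histograms of the 30 training images per class (the class names come from the assignment; everything else is a toy assumption). Since each input histogram sums to 1, the average is again a valid histogram.

```python
import numpy as np

classes = ["canvas1", "cushion1", "linsseeds1", "sand1", "seat2", "stone1"]

rng = np.random.default_rng(1)
# toy stand-in: 30 normalized 256-bin histograms per class
train_hists = {c: rng.dirichlet(np.ones(256), size=30) for c in classes}

# fuse by averaging over the images of each category
prototypes = {c: h.mean(axis=0) for c, h in train_hists.items()}
```

Plotting the six prototype arrays with matplotlib.pyplot.plot would then give the kind of chart shown in Fig 1.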

Define a function to measure the distance between a given image histogram and the histogram for each class. Then use this function to classify unseen images. For example, in Fig 1, the width of the histogram, the peak and the shape are all different. Your function should not contain any training step for this question.
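One reasonable, training-free choice of distance is the chi-squared histogram distance; it is only one option among several (intersection, earth mover's, etc.). The two prototypes and the class names below are hypothetical.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance, a common histogram dissimilarity."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify(hist, prototypes):
    """Predict the class whose prototype histogram is nearest."""
    return min(prototypes, key=lambda c: chi2_distance(hist, prototypes[c]))

# two hypothetical class prototypes and a query close to the first
flat = np.full(256, 1.0 / 256)
peaked = np.zeros(256)
peaked[:64] = 1.0 / 64
query = 0.9 * flat + 0.1 * peaked
print(classify(query, {"sand1": flat, "stone1": peaked}))  # sand1
```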

Evaluate your method on both the training and validation sets by analyzing the accuracy, recall and precision. Visualize one of the misclassified samples and briefly discuss your observations.
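The metrics can be computed with scikit-learn, for instance as below; the label lists are made-up placeholders, and macro averaging is one assumed choice for the multi-class precision and recall.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# hypothetical ground truth and predictions over the six classes
y_true = ["canvas1", "sand1", "sand1", "stone1", "seat2", "stone1"]
y_pred = ["canvas1", "sand1", "stone1", "stone1", "seat2", "stone1"]

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro", zero_division=0)
rec = recall_score(y_true, y_pred, average="macro", zero_division=0)
print(round(acc, 3))   # 0.833
```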

1.3.2 Multiple Resolution Levels [2.0]

This task is similar to Task 1.3.1, but this time you are asked to use the features from all three pyramid levels. You can either fuse the histograms for the three levels or directly measure the distance on the different levels. Evaluate your method on both the training and validation sets by analyzing the accuracy, recall and precision. Discuss your observations for this task in comparison with Task 1.3.1.
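The "measure the distance on different levels" option could be sketched as a weighted sum of per-level distances over the (3, 256) features; the chi-squared distance and the uniform weights are assumptions, not a prescribed choice.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def multilevel_distance(feat, proto, weights=(1.0, 1.0, 1.0)):
    """Sum per-level chi-squared distances over (3, 256) features;
    the weights let coarser levels count more or less."""
    return sum(w * chi2_distance(feat[i], proto[i])
               for i, w in enumerate(weights))

rng = np.random.default_rng(2)
feat = rng.dirichlet(np.ones(256), size=3)   # one image's (3, 256) features
print(multilevel_distance(feat, feat))        # 0.0 for identical features
```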

1.4 Learning-Based Classification [2.0]

Build a multilayer perceptron model to classify an image using the image feature pyramid from Task 1.2. The simplest approach is to flatten the input feature from shape [3 × 256] to [1 × 768]. For this part of the assignment, you must build and train the multilayer perceptron model with scikit-learn, or alternatively with the Keras API of tensorflow. Use the validation set to monitor the training of your classifier. Evaluate your method on both the training and validation sets by analyzing the accuracy, recall and precision.

1.5 Classification Comparison [1.0]

Compare the classifiers of Sections 1.3 and 1.4 on the test data subset. Consider classifier performance but also other criteria, e.g., training effort, prediction speed, generalization and robustness. Your brief discussion based on quantifiable criteria needs to be contained in your Jupyter notebook.

1.6 Feature Engineering and Discussion [2.0]

Considering the results of Sections 1.3 and 1.4, design a classifier that uses a multilayer perceptron for classification but first uses some form of feature extraction directly from an image instead of using the histogram. Hint: Have a look at skimage.feature and skimage.filters. In general, you are allowed any skimage function for this part. For this part, you are not allowed more than 1000 features as input to the MLP. Use the same validation set as before to monitor the training of your classifier. Briefly discuss how the new classifier performs compared to the classifier of Section 1.4. Use the test data to support your discussion.
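As one example of what skimage.feature offers, uniform local binary patterns give a very compact texture descriptor; this is only an illustrative choice, and the P/R parameters are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(img, P=8, R=1.0):
    """Uniform LBP histogram: P + 2 = 10 features per image,
    far below the 1000-feature limit."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

rng = np.random.default_rng(4)
feat = lbp_features(rng.random((576, 576)))   # synthetic image stand-in
print(feat.shape)   # (10,)
```

Other candidates within the feature budget include Gabor filter-bank responses (skimage.filters.gabor) or Haralick-style statistics from skimage.feature.graycomatrix.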

2 Submission

You will need to submit your solution as a Jupyter file; do not submit the image data. Make sure you have run all the cells. All text must be embedded in the Jupyter file; I will not look at separately submitted text files. If your Jupyter file needs a local python file to run, please submit it as well. Assignment submission is only through Virtual Campus by the deadline. No late submissions are allowed; you can submit multiple times, but only your last submission is kept and marked.

References

[1] G. Kylberg, Kylberg texture dataset v. 1.0. Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, Sweden, 2011. 
