Intro to Image Understanding (CSC420)
Assignment 2
Due Date: October 18th, 2024, 10:59:00 pm
Total: 160 marks
General Instructions:
• You are allowed to work directly with one other person to discuss the questions. However, you are still expected to write the solutions/code/report in your own words; i.e. no copying. If you choose to work with someone else, you must indicate this in your assignment submission. For example, on the first line of your report file (after your own name and information, and before starting your answer to Q1), you should have a sentence that says: “In solving the questions in this assignment, I worked together with my classmate [name & student number]. I confirm that I have written the solutions/code/report in my own words”.
• Your submission should be in the form of an electronic report (PDF), with the answers to the specific questions (each question separately), and a presentation and discussion of your results. For this, please submit a file named report.pdf to MarkUs directly.
• Separately, submit the documented code that you have written to generate your results. Please store all of those files in a folder called assignment2, zip the folder and then submit the file assignment2.zip to MarkUs. You should include a README.txt file (inside the folder) which details how to run the submitted code.
• Do not worry if you realize you made a mistake after submitting your zip file; you can submit multiple times on MarkUs until the deadline.
Part I: Theoretical Problems (55 marks)
[Question 1] (10 marks)
Assume that for a transformer model you have a batch size of 16, a sequence length of 10, a hidden dimension size of 128, and 8 attention heads. Each head computes scaled dot-product attention.
[1.a] (5 marks) Calculate the total number of dot-product operations.
[1.b] (5 marks) If the attention scores are calculated as softmax(QK^T / √dk), explain what the scaling factor √dk is and calculate its value.
[Question 2] (5 marks)
Nowadays in Vision Transformers we use learnable positional embeddings, while in the original transformer paper the positions are encoded using sine and cosine functions. Explain why learnable parameters are used in Vision Transformers instead of a fixed embedding to encode the positions.
[Question 3] (15 marks)
It is usually expensive to train a Vision Transformer on images with a resolution higher than 224x224. One idea to enhance performance is to pretrain the model at this resolution and finetune at a larger resolution (e.g. 512x512) for a few epochs. However, when finetuning on a higher-resolution image, there are new input tokens, unseen in pretraining, for which we have not learned any positional embeddings. Explain why it is not a good idea to start from random positional embeddings for these tokens and tune them, and propose at least two alternative approaches (Hint: refer to the SwinTransformerV2 paper).
[Question 4] (10 marks)
Explain what key-value caching is, and why it helps in causal attention.
[Question 5] (10 marks)
Assume we want to design a K-nearest neighbour algorithm for a regression problem using attention in transformers. What would be the query, key, and value? What modifications are needed to model K neighbours?
[Question 6] (5 marks)
Why do Vision Transformers generally need more data in training compared to Convolutional Neural Networks?
Part II: Implementation Tasks (105 marks)
In this part, you begin by training a classification model using ViT-Tiny. Then, you modify the ViT architecture by adding a custom token-mixer module. Finally, you evaluate its performance in two scenarios: one where the remaining modules use pretrained weights and another where they are randomly initialized.
The dataset you use in this assignment is called imagenette. It is a subset of 10 easily classified classes from ImageNet (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute). For this assignment, 25% of the actual data has been selected, which you can download from this link.
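If helpful, here is a minimal data-loading sketch; the folder names ‘imagenette/train‘ and ‘imagenette/val‘ are an assumption about where you extract the download, so adjust the paths to your setup:

```python
import torch
from torchvision import datasets, transforms

# Standard ImageNet-style preprocessing for a 224x224 ViT input;
# the normalization statistics are the usual ImageNet values.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: imagenette/train/<class>/... and imagenette/val/<class>/...
train_set = datasets.ImageFolder("imagenette/train", transform=train_tf)
val_set = datasets.ImageFolder("imagenette/val", transform=val_tf)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=64, shuffle=False, num_workers=4)
```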
Task I - Training ViT-Tiny (25 marks):
For this task, use the PyTorch Image Models (timm) library to create an instance of the ‘vit_tiny_patch16_224‘ model and train it from scratch (without pretraining) on imagenette for 10 epochs. You are expected to get at least 40% accuracy.
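A minimal from-scratch training sketch using timm (the AdamW optimizer and learning rate are illustrative choices, not prescribed; ‘train_loader‘ and ‘val_loader‘ come from the loading sketch above):

```python
import timm
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# From-scratch ViT-Tiny with a 10-way head for imagenette.
model = timm.create_model("vit_tiny_patch16_224", pretrained=False, num_classes=10).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Simple validation accuracy after each epoch.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"epoch {epoch}: val acc = {correct / total:.3f}")
```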
Task II - Tiny DepthwiseFormer (60 marks):
MetaFormer Is Actually What You Need for Vision shows that transformers can be decomposed into a token mixer module (i.e. self-attention) that mixes information among the tokens, and a channel-wise MLP that updates the information within each token. They show that if you use other token mixer modules instead of self-attention (they call this general architecture a MetaFormer), you can still get good performance, and they use average pooling as the mixer to verify this.
In this assignment, we want to use a variant of MetaFormer by modifying ‘vit_tiny_patch16_224‘ to use depthwise convolution instead of self-attention.
Depthwise convolution is a type of convolution in which each kernel is applied to a single channel separately. For example, with a three-channel input, we define 3 kernels so that each kernel is applied to its own channel and generates the corresponding output feature map. Depthwise convolution has fewer parameters compared to a regular convolution (less chance of overfitting) and is comparatively faster, which is why you see it in efficient architectures such as MobileNet. To use depthwise convolution in PyTorch, set the ‘groups‘ parameter of a convolution block to the number of input channels.
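For instance, this is how a depthwise convolution looks in PyTorch next to a regular one (the 3-channel input and 3x3 kernel are just for illustration):

```python
import torch
import torch.nn as nn

# groups=in_channels makes each kernel operate on exactly one input channel.
in_channels = 3
depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, groups=in_channels)
regular = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)

x = torch.randn(1, in_channels, 14, 14)
print(depthwise(x).shape)  # torch.Size([1, 3, 14, 14]) — shape is unchanged

# Far fewer parameters than the regular convolution of the same shape:
print(sum(p.numel() for p in depthwise.parameters()))  # 3*(3*3) + 3 bias = 30
print(sum(p.numel() for p in regular.parameters()))    # 3*3*(3*3) + 3 bias = 84
```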
Create a function called ‘get_depthwiseformer_tiny(pretrained=False, num_classes=10)‘ and modify the ViT-Tiny after creating an instance of it. More specifically, you have to do the following (a sketch combining these steps appears after the list):
• Replace all the attention blocks in the vision transformer with depthwise convolutions.
• Since the ViT expects tokens with shape ‘(B, L, C)‘ but the depthwise convolution expects ‘(B, C, H, W)‘, you have to modify the forward of each block in the model by creating a custom forward function that reshapes the tensor into what the depthwise convolution expects and then reshapes it back. Use a custom forward wrapper, similar to what you learned in the tutorial. To see the original forward implementation of the transformer block, check out the original implementation here.
• Check out the original implementation of the Vision Transformer to see where it adds the class token. When using depthwise convolution, we cannot rely on the cls token, since ‘L = H*W + 1‘ and we cannot reshape when the cls token is included. Check the code to see where it adds the cls token and change the corresponding model attribute (after creating an instance of the class) so that the cls token is not added. The positional embeddings of the Vision Transformer also include one learned embedding for the cls token; you have to remove it after creating an instance of the class.
• By default, the classifier head of the vision transformer picks the first token as the cls token and classifies based on it. But since your implementation no longer has a cls token, you have to use the average of all the tokens as the input to the classifier. Look at the implementation and check where it has to be modified to do average pooling instead (Hint: changing only one object attribute is enough).
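One possible sketch combining these steps is below. The timm-internal attribute names (‘blocks‘, ‘embed_dim‘, ‘cls_token‘, ‘pos_embed‘, ‘num_prefix_tokens‘, ‘global_pool‘) match recent timm versions and may differ in yours, so treat this as a starting point rather than the required implementation. Here the reshaping is folded into the mixer module itself rather than a separate forward wrapper; either route does the same tensor bookkeeping.

```python
import timm
import torch.nn as nn


class DepthwiseTokenMixer(nn.Module):
    """Drop-in replacement for the attention module: (B, L, C) -> (B, L, C)."""

    def __init__(self, dim, grid_size=14, kernel_size=3):
        super().__init__()
        self.grid_size = grid_size  # 224 / 16 = 14 patches per side, so L = 14 * 14
        self.conv = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)

    def forward(self, x):
        B, L, C = x.shape
        H = W = self.grid_size
        x = x.transpose(1, 2).reshape(B, C, H, W)  # (B, L, C) -> (B, C, H, W)
        x = self.conv(x)
        return x.reshape(B, C, L).transpose(1, 2)  # back to (B, L, C)


def get_depthwiseformer_tiny(pretrained=False, num_classes=10):
    model = timm.create_model("vit_tiny_patch16_224", pretrained=pretrained, num_classes=num_classes)

    # 1) Swap every attention block for a depthwise-conv token mixer.
    for block in model.blocks:
        block.attn = DepthwiseTokenMixer(model.embed_dim)

    # 2) Stop prepending the cls token, and drop its learned positional embedding.
    model.cls_token = None
    model.pos_embed = nn.Parameter(model.pos_embed[:, 1:].detach().clone())
    model.num_prefix_tokens = 0

    # 3) Average-pool all tokens instead of reading out the (removed) cls token.
    model.global_pool = "avg"
    return model
```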
Train your model for 10 epochs, once with ‘pretrained=False‘ and once with ‘pretrained=True‘. Do you see an improvement with ‘pretrained=True‘ even though the depthwise convolutions are trained from scratch? Explain why.
Task III - Mean Shift Tracking (20 marks):
In tutorial G, we learned about mean shift and cam shift tracking. In this question, we attempt to evaluate the performance of mean shift tracking in a sample case. For this assignment, you can use the attached short video KylianMbappe.mp4 or, alternatively, you can record and use a short (2-3 second) video of yourself. You can use any OpenCV (or other) functions you want in this question and, as always, you can use the tutorial code as a starter.
• Use the Viola-Jones face detector to detect the face in the first frame of the video. The default detector can detect the face in the first frame of the attached video. If you record a video of yourself, make sure your face is visible and facing the camera in the first frame (and throughout the video) so the detector can detect your face in the first frame.
• Construct the hue histogram of the detected face in the first frame using appropriate saturation and value thresholds for masking. Use the constructed hue histogram and mean shift tracking to track the bounding box of the face over the length of the video (from frame #2 until the last frame). So far, this is similar to what we did in the tutorial.
• Also, use the Viola-Jones face detector to detect the bounding box of the face in each video frame (from frame #2 until the last frame). If multiple faces are detected, select the one that is closest to the one detected in the previous frame.
• Calculate the intersection over union (IoU) between the tracked bounding box and the Viola-Jones detected box in each frame (a helper sketch appears after this list). Plot the IoU over time. The x axis of the plot should be the frame number (from 2 until the last frame) and the y axis should be the IoU on that frame.
• In your report, include a sample frame in which the IoU is large (e.g. over 80% or some other reasonable threshold; let's call this t_high) and another sample frame in which the IoU is low (e.g. below 50% or some other reasonable threshold; let's call this t_low). Draw the tracked and detected bounding boxes in each frame using different colours (and indicate which is which).
• Report the percentage of frames in which the IoU is larger than t_high.
• Look at the detected and tracked boxes in frames where the IoU is small (lower than t_low) and report which one (the Viola-Jones detection or the tracked bounding box) is correct or more accurate more often (we don't need a number, just eyeball it). Very briefly (1-2 sentences) explain why that might be.
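For reference, a small helper sketch for the per-frame IoU, with boxes in OpenCV's (x, y, w, h) format as returned by ‘detectMultiScale‘ and used by ‘cv2.meanShift‘; the function names are suggestions only:

```python
import cv2

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    x1, y1 = max(xa, xb), max(ya, yb)                      # intersection top-left
    x2, y2 = min(xa + wa, xb + wb), min(ya + ha, yb + hb)  # intersection bottom-right
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

# Viola-Jones face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Returns an array of (x, y, w, h) boxes; it may be empty.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```

When ‘detectMultiScale‘ returns several faces, pick the one whose centre is nearest to the previous frame's box, as required above.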