Neural Network - Assignment

1. Assignment Overview

In this homework assignment, you will implement a multi-layer perceptron (MLP) neural network and use it to classify data from four different datasets. Your implementation must be written from scratch, using no external libraries other than NumPy; machine learning libraries are NOT allowed (e.g., SciPy, TensorFlow, Caffe, PyTorch, Torch, MXNet, etc.).

2. Data Description

You will train and test your neural network implementation on four datasets inspired by the TensorFlow Neural Network Playground (https://playground.tensorflow.org). We encourage you to visit this site and experiment with various model settings and datasets. The train and test files for each dataset represent an 80/20 train/test split. You are welcome to aggregate the data from each set and re-split it to your liking. All datasets consist of 2-dimensional data points (the (x, y) coordinates of a point in R^2), along with binary labels (either 0 or 1).
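The exact file layout is not specified in this handout; as a minimal loading sketch, assume each file is a CSV with three columns (x, y, label) and note that the filenames below are hypothetical placeholders:

```python
import numpy as np

def load_dataset(path):
    """Load one dataset file. Assumes CSV rows of the form x,y,label;
    adjust the delimiter/columns to match the actual files."""
    data = np.loadtxt(path, delimiter=",")
    X = data[:, :2]               # 2-D coordinates in R^2
    y = data[:, 2].astype(int)    # binary labels (0 or 1)
    return X, y

# Hypothetical filenames; substitute the actual dataset files.
X_train, y_train = load_dataset("spiral_train.csv")
X_test, y_test = load_dataset("spiral_test.csv")
```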


3. Task Description

Your task is to implement a multi-hidden-layer neural network learner (see the Model Description section for additional details) that, for a given dataset, will do the following:

3.1. Construct and train a neural network classifier using provided labeled training data,

3.2. Use the learned classifier to classify the unlabeled test data,

3.3. Output the predictions of your classifier on the test data into a file in the same directory (a minimal output sketch follows this list),

3.4. Finish within 2 minutes (covering both training your model and making predictions).
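For step 3.3, here is a minimal sketch of writing one prediction per line to an output file; the filename and format shown are assumptions, so match whatever the grader actually expects:

```python
import numpy as np

predictions = np.array([0, 1, 1, 0])  # placeholder; use your model's test outputs
np.savetxt("test_predictions.csv", predictions, fmt="%d")  # hypothetical filename
```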


4. Model Description

The model you will implement is a vanilla feed-forward neural network, possibly with many hidden layers (see Figure 2 for a generic depiction). Your network should have 2 input nodes and output a single value. Beyond this, there are no constraints on your model's structure; it is up to you to decide what activation function, number of hidden layers, number of nodes per hidden layer, etc. your model should use. Note that you should use cross-entropy as your loss function (each dataset presents a binary classification task). Depending on your implementation, you may also need to apply the softmax function to your last-layer outputs and select a single value for your final output.
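As one illustration of such a network (not the required architecture), here is a forward pass with a single tanh hidden layer and a sigmoid output producing P(y = 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    """Forward pass. X: (n, 2); W1: (2, hidden); W2: (hidden, 1).
    Returns P(y=1) for each point, shape (n, 1)."""
    h = np.tanh(X @ W1 + b1)     # hidden activations
    return sigmoid(h @ W2 + b2)  # output probabilities
```

With a single sigmoid output, thresholding the probability at 0.5 gives the predicted class, which avoids the softmax-and-select step mentioned above.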


5. Implementation Guidance

Here are a few suggestions you might want to consider during your implementation:

5.1. Train your model using mini-batches: there are many good reasons to use mini-batches to train your model (instead of individual points or the entire dataset at once), including benefits to performance and convergence.
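A minimal sketch of mini-batch iteration with shuffling each epoch; the batch size of 32 in the usage line is just a common starting point, not a recommendation:

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    """Yield shuffled (X_batch, y_batch) pairs covering the data once."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        yield X[b], y[b]

# Usage:
# for X_b, y_b in minibatches(X_train, y_train, 32, np.random.default_rng(0)):
#     ...
```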

5.2. Initialize weights and biases: employ a proper random initialization scheme for your weights and biases. This can have a large impact on your final model.
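As a sketch, one common scheme is He initialization (scaled Gaussian weights, zero biases); Xavier/Glorot scaling, sqrt(1 / fan_in), is the usual alternative for tanh or sigmoid activations:

```python
import numpy as np

def init_layer(fan_in, fan_out, rng):
    """He-style initialization: weights ~ N(0, 2 / fan_in), biases at zero."""
    W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b
```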

5.3. Loss function: as mentioned, you need to use cross-entropy as your loss function.
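For a single sigmoid output, the binary form of cross-entropy applies; a minimal sketch, with clipping to avoid log(0) on saturated predictions:

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-12):
    """Mean binary cross-entropy. p: predicted P(y=1); y: true 0/1 labels."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```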

5.4. Use backpropagation: it hardly needs mentioning, but you should use backpropagation along with a gradient-descent-based optimization algorithm to update your network's weights during training.
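Continuing the one-hidden-layer sketch from Section 4, the backward pass works out cleanly because, for a sigmoid output with cross-entropy loss, the output-layer error is simply p - y:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backward(X, y, W1, b1, W2, b2):
    """Gradients for the one-hidden-layer network sketched in Section 4."""
    n = len(X)
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    d_out = (p - y.reshape(-1, 1)) / n     # dL/dz at the output, shape (n, 1)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    return dW1, db1, dW2, db2
```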

5.5. Vectorize your implementation: vectorizing your implementation can have a large impact on performance. Use vector/matrix operations when possible instead of explicit programmatic loops.
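A small demonstration of the difference: both snippets below compute the same layer pre-activations, but the second replaces the Python loops with one matrix multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))   # 1000 points in R^2
W = rng.normal(size=(2, 8))      # weights for a hypothetical 8-unit layer

# Explicit loops: executed point by point in slow Python bytecode.
out_loop = np.zeros((1000, 8))
for i in range(1000):
    for j in range(8):
        out_loop[i, j] = X[i] @ W[:, j]

# Vectorized: a single optimized matrix multiply.
out_vec = X @ W
assert np.allclose(out_loop, out_vec)
```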

5.6. Regularize your model: leverage regularization techniques to ensure your model doesn't overfit the training data and to keep model complexity in check. This can be especially important in settings with noisy data (which you will face on both the public and hidden grading datasets).
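As one option, a sketch of L2 regularization (weight decay); the penalty strength lam is a hyperparameter you would tune, and biases are usually left unpenalized:

```python
import numpy as np

def l2_penalty(weights, lam):
    """L2 penalty to add to the loss: lam * sum(W**2) over weight matrices."""
    return lam * sum(np.sum(W ** 2) for W in weights)

def l2_grad(W, lam):
    """Corresponding term to add to each weight matrix's gradient."""
    return 2.0 * lam * W
```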

5.7. Plot your learning curve: plotting your train/test accuracy after each epoch is a quick and helpful way to see how your network is performing during training. You are allowed to use external plotting libraries here, but note that you should remove them prior to submission for performance reasons. The figure on the right shows a generic example of such a plot; your plot(s) may look different.
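A minimal plotting sketch (Matplotlib is assumed as the external library; again, remove it before submitting):

```python
import matplotlib.pyplot as plt

# Per-epoch accuracies recorded during training; these values are placeholders.
train_acc = [0.60, 0.72, 0.81, 0.88]
test_acc = [0.58, 0.69, 0.77, 0.83]

plt.plot(train_acc, label="train accuracy")
plt.plot(test_acc, label="test accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```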

5.8. Putting it all together: see Figure 3 on the next page for a basic depiction of an example training pipeline. Note that this diagram lacks detail and is only meant to provide a rough outline for how your training loop might look.
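Tying the earlier sketches together, one possible training loop might look like the following. Every hyperparameter here (hidden width 16, learning rate 0.1, 200 epochs, batch size 32, lam 1e-3) is a placeholder to tune, not a recommendation, and X_train, y_train, and X_test are assumed to be loaded as in Section 2:

```python
import numpy as np

# Reuses init_layer (5.2), minibatches (5.1), backward (5.4), l2_grad (5.6),
# and forward (Section 4) from the sketches above.
rng = np.random.default_rng(0)
W1, b1 = init_layer(2, 16, rng)
W2, b2 = init_layer(16, 1, rng)
lr, epochs, batch_size, lam = 0.1, 200, 32, 1e-3

for epoch in range(epochs):
    for X_b, y_b in minibatches(X_train, y_train, batch_size, rng):
        dW1, db1, dW2, db2 = backward(X_b, y_b, W1, b1, W2, b2)
        dW1 += l2_grad(W1, lam)  # L2 term from the regularization sketch
        dW2 += l2_grad(W2, lam)
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2

# Threshold output probabilities to get the 0/1 predictions for step 3.3.
predictions = (forward(X_test, W1, b1, W2, b2) >= 0.5).astype(int).ravel()
```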

