From e7a33478ff1d579237b8635be5ca3db1bf3b1a9e Mon Sep 17 00:00:00 2001
From: Andreas Brandmaier
Date: Tue, 8 Dec 2020 11:44:51 +0100
Subject: [PATCH 1/5] - removed DS_Store from figures - added global gitignore
 to ignore DS_Store files

---
 .gitignore_global | 1 +
 figures/.DS_Store | Bin 6148 -> 0 bytes
 2 files changed, 1 insertion(+)
 create mode 100644 .gitignore_global
 delete mode 100644 figures/.DS_Store

diff --git a/.gitignore_global b/.gitignore_global
new file mode 100644
index 0000000..e43b0f9
--- /dev/null
+++ b/.gitignore_global
@@ -0,0 +1 @@
+.DS_Store
diff --git a/figures/.DS_Store b/figures/.DS_Store
deleted file mode 100644
index 5008ddfcf53c02e82d7eee2e57c38e5672ef89f6..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001

literal 6148
zcmeH~Jr2S!425mzP>H1@V-^m;4Wg<&0T*E43hX&L&p$$qDprKhvt+--jT7}7np#A3
zem<@ulZcFPQ@L2!n>{z**++&mCkOWA81W14cNZlEfg7;MkzE(HCqgga^y>{tEnwC%0;vJ&^%eQ
Ls35+`xjp>T0
Date: Tue, 8 Dec 2020 11:46:20 +0100
Subject: [PATCH 2/5] fixed data plural

---
 1-Neural-Networks-Backpropagation.ipynb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/1-Neural-Networks-Backpropagation.ipynb b/1-Neural-Networks-Backpropagation.ipynb
index 559c29c..55abc35 100644
--- a/1-Neural-Networks-Backpropagation.ipynb
+++ b/1-Neural-Networks-Backpropagation.ipynb
@@ -27,7 +27,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# 1. Example 2: What if the data is not linearly separable?"
+    "# 1. Example 2: What if the data are not linearly separable?"
    ]
   },
   {
@@ -36,7 +36,7 @@
    "source": [
     "In the previous notebook (0-Perceptron-Gradient-Descent.iypnb), we have learned that we can use the perceptron algorithm to build a binary classifier for linearly separable data (by learning a hyperplane).\n",
     "\n",
-    "Yet, not all data is linearly separable. Take a look at the figure below and try to find a line that separates the blue and red dots:"
+    "Yet, not all data are linearly separable. Take a look at the figure below and try to find a line that separates the blue and red dots:"
    ]
   },
   {
@@ -641,9 +641,9 @@
     "\n",
     "- An *output* layer, containing one output neuron for each target class in the dataset\n",
     "\n",
-    "In the illustration below, the input data is an image of a single handwritten digit. This image has 8x8 pixels; To serve it to our ANN, we would flatten it to be a single vector of 64 values. The input layer therefore would have 64 input neurons (each representing one of the 64 input features).\n",
+    "In the illustration below, the input data are an image of a single handwritten digit. This image has 8x8 pixels; To serve it to our ANN, we would flatten it to be a single vector of 64 values. The input layer therefore would have 64 input neurons (each representing one of the 64 input features).\n",
     "\n",
-    "As the data consists of single handwritten digits, we would set the output layer to contain 10 neurons (one for each digit between 0 and 9)."
+    "As the data are single handwritten digits, we would set the output layer to contain 10 neurons (one for each digit between 0 and 9)."
    ]
   },
   {
From 99c61e59cc7b3381b3b643a810cb5bfdae67e20d Mon Sep 17 00:00:00 2001
From: Andreas Brandmaier
Date: Tue, 8 Dec 2020 15:29:27 +0100
Subject: [PATCH 3/5] fixed bugs re variable step size

---
 0-Perceptron-Gradient-Descent.ipynb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/0-Perceptron-Gradient-Descent.ipynb b/0-Perceptron-Gradient-Descent.ipynb
index d0bef90..88390f1 100644
--- a/0-Perceptron-Gradient-Descent.ipynb
+++ b/0-Perceptron-Gradient-Descent.ipynb
@@ -28,7 +28,7 @@
    "# 1. Example 1: The iris dataset"
    ]
   },
-  {
+  {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
@@ -1231,7 +1231,7 @@
     " # extract the weights for each gradient step\n",
     " training_w = np.array(perceptron.training_w)\n",
     " n_steps = len(training_w)\n",
-    " steps = np.array([0,1,2,50,n_steps-1])\n",
+    " steps = np.array([0,1,2,round((n_steps-1)/2),n_steps-1])\n",
     "\n",
     " # compute the values of the loss function for a grid of w-values, given our learned bias term\n",
     " b = float(perceptron.b)\n",
@@ -1260,8 +1260,8 @@
     " ax.scatter(training_w[0,0], training_w[0,1], color='black', s=200, zorder=99)\n",
     " ax.plot(training_w[steps,0], training_w[steps,1], color='white', lw=1)\n",
     " # add line for final weights\n",
-    " ax.axvline(training_w[s,0], color='red', lw=1, ls='--')\n",
-    " ax.axhline(training_w[s,1], color='red', lw=1, ls='--')\n",
+    " ax.axvline(training_w[steps,0], color='red', lw=1, ls='--')\n",
+    " ax.axhline(training_w[steps,1], color='red', lw=1, ls='--')\n",
     " # label axes\n",
     " cbar.set_label('Loss')\n",
     " ax.set_title('Final loss: {}'.format(perceptron.training_loss[-1]))\n",
From 328f65d269d0f6c4fd6e9f42d423bcdd82126991 Mon Sep 17 00:00:00 2001
From: Andreas Brandmaier
Date: Tue, 8 Dec 2020 15:29:57 +0100
Subject: [PATCH 4/5] fixed typo

---
 0-Perceptron-Gradient-Descent.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/0-Perceptron-Gradient-Descent.ipynb b/0-Perceptron-Gradient-Descent.ipynb
index 88390f1..2da60c1 100644
--- a/0-Perceptron-Gradient-Descent.ipynb
+++ b/0-Perceptron-Gradient-Descent.ipynb
@@ -842,7 +842,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# 5. (Stochastic) gradinet descent"
+    "# 5. (Stochastic) gradient descent"
    ]
   },
   {
From 8921f73ab53b991466ed23a619b97eed73a9213f Mon Sep 17 00:00:00 2001
From: Andreas Brandmaier
Date: Tue, 8 Dec 2020 15:34:26 +0100
Subject: [PATCH 5/5] removed erroneous change that breaks the notebook

---
 0-Perceptron-Gradient-Descent.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/0-Perceptron-Gradient-Descent.ipynb b/0-Perceptron-Gradient-Descent.ipynb
index 2da60c1..0d9221b 100644
--- a/0-Perceptron-Gradient-Descent.ipynb
+++ b/0-Perceptron-Gradient-Descent.ipynb
@@ -28,7 +28,7 @@
    "# 1. Example 1: The iris dataset"
    ]
   },
-  {},
+  {
    "cell_type": "markdown",
    "metadata": {},
    "source": [