The previous model with a hidden layer achieved 99.97% training accuracy, 97.51% validation accuracy, and 97.6% test accuracy.

This time, we’re going to use dropout, a regularization technique that reduces overfitting in artificial neural networks by preventing complex co-adaptations on the training data.

You can see DROPOUT = 0.1 is set, which means 10% of the units are randomly ignored (dropped out) on each training update, so the network cannot rely too heavily on any single unit.
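As a minimal sketch of where such a rate could plug in (the layer sizes, input shape, and Keras model here are my assumptions, not taken from the original post):

from tensorflow import keras
from tensorflow.keras import layers

DROPOUT = 0.1  # fraction of units randomly dropped on each training update

# Hypothetical model: one hidden layer followed by dropout.
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),    # e.g. 28x28 grayscale images
    layers.Dense(128, activation="relu"),    # hidden layer
    layers.Dropout(DROPOUT),                 # randomly zero 10% of activations
    layers.Dense(10, activation="softmax"),  # output class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])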

Conclusion

Our previous model achieved 99.97% training accuracy, 97.51% validation accuracy, and 97.6% test accuracy; with dropout added, test accuracy improved to 98.26%.

--

The perceptron algorithm was invented in 1958 by Frank Rosenblatt at the Cornell Aeronautical Laboratory. A perceptron mimics a neuron: it takes input data from other neurons and sends its output on to other neurons.

A perceptron has its own weights w1, w2, … for inputs x1, x2, …. If the weighted sum of the inputs x1w1 + x2w2 + … is lower than the threshold 𝜃, it outputs 0; otherwise it outputs 1, which is called activating.

Without changing the structure, just by changing the weights and the threshold, we can build AND, NAND, and OR gates, as the sketch below shows.

Logic gate using perceptron
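A minimal sketch of this idea in Python (the specific weight and threshold values are my choices, not from the original post; any values with the same relationships work):

def perceptron(x1, x2, w1, w2, theta):
    # Output 0 if the weighted sum is below the threshold, 1 otherwise (activate).
    return 1 if x1 * w1 + x2 * w2 >= theta else 0

# Same structure, different weights and thresholds:
def AND(x1, x2):
    return perceptron(x1, x2, 0.5, 0.5, 0.8)

def NAND(x1, x2):
    return perceptron(x1, x2, -0.5, -0.5, -0.8)

def OR(x1, x2):
    return perceptron(x1, x2, 0.5, 0.5, 0.3)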

Using multiple perceptrons, we can also build an XOR gate, which a single perceptron cannot compute because XOR is not linearly separable.
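Continuing the sketch above, XOR can be composed from the NAND, OR, and AND gates already defined (a classic two-layer construction):

def XOR(x1, x2):
    s1 = NAND(x1, x2)   # 0 only when both inputs are 1
    s2 = OR(x1, x2)     # 0 only when both inputs are 0
    return AND(s1, s2)  # 1 exactly when the inputs differ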

--


Apple’s new M1 architecture is based on ARM64, and Anaconda does not yet support the M1 natively. So, to get the M1’s full performance, we’ll install conda via Miniforge, which does support the M1 natively.

You could go to the Miniforge GitHub page and follow the instructions to run its install shell script. But there is an easier way to install useful programs on macOS: Homebrew.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Paste the line above into a Terminal window and run it. Once it’s done, you’re ready to use Homebrew.

Install Miniforge

With Homebrew installed, open a terminal and type:

brew install miniforge

After the installation completes, you’re almost ready to use the conda command. You still need your shell to source the changes added to your .bashrc or .zshrc.

Run conda init zsh, restart the terminal, and you’re done.
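To double-check that you got the native ARM64 build, one quick check (assuming the base environment’s Python is on your path after conda init):

conda --version
python -c "import platform; print(platform.machine())"  # prints arm64 on a native build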

--