Thursday, May 7, 2020
How to Generate Faces Using VAE with Keras?
A Variational Autoencoder (VAE) can do many amazing things, such as generating faces, if we increase the latent space dimensionality from 2D to a higher-dimensional space.
In the previous tutorial, we learned how to build a VAE, trained it on the MNIST handwritten digits dataset, and analyzed it with the test data. Go through it once before continuing.
Welcome to aiRobott, I am Kishor Kumar Vajja. In this tutorial, we will learn how to generate celebrity faces using a VAE with Keras.
Friday, December 20, 2019
How to build a simple Deep Neural Network using Keras?
It is very easy to build a simple Deep Neural Network using Keras; it requires three things:
1. A dataset, loaded and scaled.
2. Layers to build the Model.
3. Activation functions and the Model class.
After building the Model, we need to compile it with an optimizer and a loss function. The Model is then ready for training on the dataset, and finally we test the Model to evaluate it. A minimal end-to-end sketch is shown below; after that, we will go through all of these steps in detail.
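Here is a minimal sketch of the whole workflow, assuming the tensorflow.keras API; the layer sizes (200 and 150 units) and training settings are illustrative assumptions, not necessarily those used later in this post.

import numpy as np
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.utils import to_categorical

# 1. Load and scale the dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)   # one-hot labels
y_test = to_categorical(y_test, 10)

# 2. & 3. Build the Model from layers and activation functions
model = Sequential([
    Flatten(input_shape=(32, 32, 3)),   # input layer
    Dense(200, activation='relu'),      # hidden dense layers
    Dense(150, activation='relu'),
    Dense(10, activation='softmax'),    # output layer (10 classes)
])

# Compile, train, and evaluate
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=10)
model.evaluate(x_test, y_test)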
1. Dataset (CIFAR-10)
For this model we are using the CIFAR-10 dataset to train the Model. Our Deep Neural Network has an input layer, hidden dense layers, and an output layer; with this network we can make predictions on new data. This is a supervised learning method.
Loading the dataset for scaling:
Images are actually numpy arrays, so we need to import the numpy package at the beginning, along with the Keras datasets package to get the CIFAR-10 dataset.
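A minimal sketch of the imports (assuming the tensorflow.keras API; the original post may have used the standalone keras package):

import numpy as np                              # images are numpy arrays
from tensorflow.keras.datasets import cifar10   # built-in CIFAR-10 loader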
Now we will load the CIFAR-10 dataset:
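For example (the shapes in the comments match those described below):

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

print(x_train.shape)   # (50000, 32, 32, 3)
print(x_test.shape)    # (10000, 32, 32, 3)
print(y_train.shape)   # (50000, 1)
print(y_test.shape)    # (10000, 1)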
Here, x_train and x_test are the input datasets for training and testing; they are numpy arrays of shape [50000, 32, 32, 3] and [10000, 32, 32, 3] respectively. It's worth noting the shape of the image data in x_train: [50000, 32, 32, 3]. The first dimension of this array references the index of the image in the dataset, the second and third relate to the size of the image, and the last is the channel (i.e., red, green, or blue, since these are RGB images).
By default, image data consists of integers between 0 and 255 for each pixel channel; the x_test pixel values are likewise between 0 and 255, as the check below shows.
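A quick check of the raw pixel values might look like this (a sketch; the original post showed this output as a screenshot):

print(x_train.dtype)                  # uint8 (integer pixel values)
print(x_train.min(), x_train.max())   # 0 255
print(x_test.min(), x_test.max())     # 0 255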
y_train and y_test are numpy arrays with shape [50000, 1] and [10000, 1] respectively, containing the integer labels in the range 0 to 9 for the class of each image.
For classifying the output into 10 classes, we define NUM_CLASSES as a variable for the number of classes:
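In code, this is simply:

NUM_CLASSES = 10   # CIFAR-10 has 10 object classes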
Neural networks work best when each input is inside the range -1 to 1, so we need to divide the x_train and x_test pixel values by 255:
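A sketch of the scaling step (casting to float32 before dividing is assumed here):

x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0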
Now check the values for x_train and x_test again:
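For example:

print(x_train.dtype)                  # float32
print(x_train.min(), x_train.max())   # 0.0 1.0
print(x_test.min(), x_test.max())     # 0.0 1.0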
Notice that the x_train and x_test pixel values have been converted to floating-point values in the range 0 to 1; compare these with the previous integer values.
We also need to change the integer labelling of the images to one-hot encoded vectors of length 10. Using the following code, the new shapes of y_train and y_test are therefore [50000, 10] and [10000, 10] respectively:
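A sketch using Keras' to_categorical utility (assuming the tensorflow.keras API):

from tensorflow.keras.utils import to_categorical

y_train = to_categorical(y_train, NUM_CLASSES)
y_test = to_categorical(y_test, NUM_CLASSES)

print(y_train.shape)   # (50000, 10)
print(y_test.shape)    # (10000, 10)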
There are no columns or rows in this dataset; instead, this is a tensor with four dimensions. For example, if we want to know the value of the green channel (i.e., channel 1) of the pixel at position (12, 13) in the image at index 54, we just type the following:
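For example:

# green channel (index 1) of the pixel at position (12, 13)
# of the image at index 54
x_train[54, 12, 13, 1]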
With this, the CIFAR-10 dataset has been downloaded and scaled, ready for building the model.
Sunday, December 15, 2019
CIFAR-10 dataset
CIFAR-10 is an established computer-vision dataset used for object recognition. It is a subset of the 80 million tiny images dataset and consists of 60,000 32x32 color images containing one of 10 object classes, with 6000 images per class. There are 50,000 training images and 10,000 test images. It was collected by Alex Krizhevsky, Vinod Nair and Geoffrey Hinton.
The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1000 randomly selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Here are the classes in the dataset: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
The classes are completely mutually exclusive; there is no overlap between automobiles and trucks. “Automobiles” includes sedans, SUVs, and things of that sort. “Truck” includes only big trucks. Neither includes pickup trucks.
Thursday, December 5, 2019
bisect module in Python3
The bisect module implements an algorithm for inserting elements into a list while maintaining the list in sorted order.
=============================================================================
(1). Inserting in Sorted Order:
Here is a simple example in which insort( ) is used to insert items into a list in sorted order.
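A sketch of such an example (the sample values below are assumed, standing in for randomly generated numbers; the original post showed its code and output as screenshots):

import bisect

# Sample values standing in for randomly generated numbers,
# chosen so the data contains a duplicate (77).
values = [14, 85, 77, 26, 50, 45, 66, 79, 10, 3, 84, 77, 1]

print('New  Pos  Contents')
print('---  ---  --------')

l = []
for r in values:
    position = bisect.bisect(l, r)   # position where r will be inserted
    bisect.insort(l, r)              # insert r, keeping l sorted
    print('{:3}  {:3}'.format(r, position), l)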
The first column of the output shows the new random number. The second column shows the position where the number will be inserted into the list. The remainder of each line is the current sorted list.
For the small amount of data being manipulated here, it might be faster to simply build the list and then sort it once. For long lists, however, significant time and memory savings can be achieved using an insertion sort algorithm like insort( ), especially when the operation to compare two members of the list requires expensive computation.
(2). Handling Duplicates:
In the above example, the result shows a repeated value, 77. The bisect module provides two ways to handle repeats: new values can be inserted either to the left of existing values, or to the right.
The insort( ) function is actually an alias for insort_right( ), which inserts an item after the existing value. The corresponding function insort_left( ) inserts an item before the existing value.
Let's see an example :
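A sketch using the same assumed sample values as before, this time with the *_left variants:

import bisect

values = [14, 85, 77, 26, 50, 45, 66, 79, 10, 3, 84, 77, 1]

print('New  Pos  Contents')
print('---  ---  --------')

l = []
for r in values:
    position = bisect.bisect_left(l, r)   # leftmost insertion point
    bisect.insort_left(l, r)              # insert before any equal values
    print('{:3}  {:3}'.format(r, position), l)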
When the same data is manipulated using bisect_left( ) and insort_left( ), the result is the same sorted list, but the insert positions are different for the duplicate values.
=============================================================================