Tuesday, November 12, 2019

The Lambda Function

The lambda expression is a compact way of defining a small, anonymous function inline, without a def statement.
For example, the function:
>>> def area(b, h):
...     return 0.5*b*h
...
>>> area(5, 4)
10.0

This function can be rewritten as a lambda, like this:
>>> area = lambda b, h: 0.5*b*h
>>> area(5, 4)
10.0
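
Lambdas are especially handy when a small, throwaway function is passed as an argument, for example as the key of sorted(). The list below is just an illustrative example:
>>> points = [(3, 1), (1, 5), (2, 2)]
>>> sorted(points, key=lambda p: p[1])
[(3, 1), (2, 2), (1, 5)]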

The zip() function

The zip() function takes two or more sequences and pairs them up into a sequence of tuples, where each tuple contains one element from each input sequence.

Let's see an example:
>>> a = [1, 2, 3, 4, 5]
>>> b = ['a', 'b', 'c', 'd', 'e']

Now we will call the zip() function on them, like this:
>>> zip(a, b)
<zip object at 0x5fdb7d8>

zip() returns a lazy iterator rather than a list, so to see the result we convert it to a list:
>>> list(zip(a, b))
[(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
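
Because zip() accepts any number of sequences, the same call works with three lists as well (c below is just an example list):
>>> c = [10, 20, 30, 40, 50]
>>> list(zip(a, b, c))
[(1, 'a', 10), (2, 'b', 20), (3, 'c', 30), (4, 'd', 40), (5, 'e', 50)]
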
This is how we can use the zip() function.

Saturday, November 9, 2019

NDArray Basics using MXNet

The NDArray module is MXNet's primary tool for storing and transforming data, and it works much like NumPy's multi-dimensional array. It also has some advantages: NDArrays support asynchronous computation on CPUs, GPUs, and distributed cloud architectures, and they provide support for automatic differentiation. These advantages make NDArray indispensable for deep learning.

NDArrays are multi-dimensional arrays of numerical values. An NDArray with one axis corresponds to a vector, one with two axes corresponds to a matrix, and one with more than two axes corresponds to a tensor.

To use MXNet in Python, you first need to install it on your PC by typing the following at the command prompt:
C:\Users\xxxx> pip install mxnet

To get started, let's import mxnet and the ndarray module from it:

>>> import mxnet as mx
>>> from mxnet import nd

(1). We can create a simple 1-dimensional array from a Python list, like this:
>>> x = nd.array([1, 2, 3])
>>> print(x)
[1. 2. 3.]
<NDArray 3 @cpu(0)>

<NDArray 3 @cpu(0)> indicates that x is a one-dimensional array of length 3 and it resides in CPU main memory. The 0 in @cpu(0) has no special meaning and does not represent a specific core.

(2). We can create a 2-dimensional array from a nested Python list, like this:
>>> y = nd.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
>>> print(y)
[[1. 2. 3. 4.]
 [1. 2. 3. 4.]
 [1. 2. 3. 4.]]
<NDArray 3x4 @cpu(0)>

(3). We can create an uninitialized 2D array (also called a matrix) with 3 rows and 3 columns, like this:
>>> x = nd.empty((3, 3))
>>> print(x)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
<NDArray 3x3 @cpu(0)>
The empty function just grabs some memory and hands back a matrix without setting the values of any of its entries, so the entries may contain arbitrary values (the zeros above just happen to be what was in that memory).

(4). If we want our matrices to be initialized with zeros, we use the zeros function, like this:
>>> x = nd.zeros((3, 3))
>>> print(x)
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
<NDArray 3x3 @cpu(0)>

(5). Similarly, ndarray has a function for creating a matrix of all ones; use the ones function, like this:
>>> x = nd.ones((3, 4))
>>> print(x)
[[1. 1. 1. 1.]
 [1. 1. 1. 1.]
 [1. 1. 1. 1.]]
<NDArray 3x4 @cpu(0)>


(6). We can fill a 2D array with 3 rows and 3 columns with a constant value (for example, 7), like this:
>>> x = nd.full((3, 3), 7)
>>> print(x)
[[7. 7. 7.]
 [7. 7. 7.]
 [7. 7. 7.]]
<NDArray 3x3 @cpu(0)>

(7). Sometimes we need to create an array of random values, for example to initialize the parameters of a neural network. For that we can use the random_normal function, which samples from a standard normal distribution with zero mean and unit variance, like this:
>>> y = nd.random_normal(0, 1, shape=(3,4))
>>> print(y)
[[ 1.1630785   0.4838046   0.29956347  0.15302546]
 [-1.1688148   1.558071   -0.5459446  -2.3556297 ]
 [ 0.54144025  2.6785064   1.2546344  -0.54877406]]
<NDArray 3x4 @cpu(0)>

(8). Sometimes you need a new array with the same shape as an existing one but not its contents; for that, use the zeros_like function, like this:
>>> z = nd.zeros_like(y)
>>> print(z)
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]
 [0. 0. 0. 0.]]
<NDArray 3x4 @cpu(0)>

(9). You can access the dimensions of an array using the .shape attribute, like this:
>>> y.shape
(3, 4)

(10). You can access the total number of elements in an array using the .size attribute, like this:
>>> y.size
12

(11). You can query the data type using the .dtype attribute, like this:
>>> y.dtype
numpy.float32

float32 is the default data type.

(12). The device on which an array is stored (and on which operations on it run) is revealed by the .context attribute, like this:
>>> y.context
cpu(0)
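
As mentioned at the start of this post, NDArrays also support automatic differentiation through MXNet's autograd module. A minimal sketch (assuming MXNet 1.x, where attach_grad, autograd.record and backward are available) looks like this:

>>> from mxnet import autograd
>>> x = nd.array([1., 2., 3.])
>>> x.attach_grad()                 # allocate space for the gradient
>>> with autograd.record():         # record the computation graph
...     y = (x * x).sum()           # y = x1^2 + x2^2 + x3^2
...
>>> y.backward()                    # compute dy/dx
>>> print(x.grad)                   # gradient of the sum of squares is 2*x
[2. 4. 6.]
<NDArray 3 @cpu(0)>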




Sunday, November 3, 2019

Linear Algebra - Basics for Deep Learning

Scalars :

Using MXNet, we can work with scalars by creating NDArrays with just one element. We will see addition, multiplication, division, and exponentiation in this session.

If you have not installed the MXNet package yet, first install it on your PC from the command prompt using the following command:
C:\Users\ABC> pip install mxnet

Let us take two scalars, x and y.

C:\Users\ABC> python

>>> from mxnet import nd

>>> x = nd.array([3.0])
>>> y = nd.array([2.0])

>>> print('x + y = ', x+y)
x + y = [5.]

>>> print('x * y = ', x*y)
x * y = [6.]

>>> print('x / y = ', x/y)
x / y = [1.5]

>>> print('x ** y = ', nd.power(x,y))
x ** y = [9.]
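
If you need a plain Python number rather than a 1-element NDArray, the result can be converted with the asscalar method (assuming it is available in your MXNet version):

>>> (x + y).asscalar()
5.0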

Vectors :

A vector is an array (a list) of numbers, for example [1.0, 3.0, 5.0, 2.0]. Each of the numbers in the vector is a single scalar value; we call these values the entries or components of the vector.

In MXNet, we work with vectors via 1D NDArrays.

>>> x = nd.arange(4)
>>> print('x = ', x)
x = [0. 1. 2. 3.]

For example, if we want the element at index 3 of a vector, use
>>> x[3]
[3.]


Length, Dimensionality and Shape :


The length of a vector is commonly called its dimension. In NDArray we can access a vector's length using the .shape attribute, like this:
>>> x.shape
(4,)
The shape is a tuple that lists the dimensionality of the NDArray along each of its axes. Because a vector can only be indexed along one axis, its shape has just one element.

Note that a scalar has 0 dimensions and a vector has 1 dimension (1 axis), so you can think of a 2D array as having 2 axes, a 3D array as having 3 axes, and so on.

Let's see some examples:

>>> a = 2
>>> x = nd.array([1,2,3])
>>> y = nd.array([10,20,30])
>>> print(a * x)
[2. 4. 6.]

>>> print(a * x + y)
[12. 24. 36.]


Matrices :

Matrices are 2D arrays; they are usually denoted with capital letters such as A, B, and C.

>>> A = nd.arange(20).reshape((5,4))
>>> print(A)
[[ 0.  1.  2.  3.]
 [ 4.  5.  6.  7.]
 [ 8.  9. 10. 11.]
 [12. 13. 14. 15.]
 [16. 17. 18. 19.]]

We can transpose a matrix with the .T attribute.

>>> print(A.T)
[[ 0.  4.  8. 12. 16.]
 [ 1.  5.  9. 13. 17.]
 [ 2.  6. 10. 14. 18.]
 [ 3.  7. 11. 15. 19.]]

Tensors :

Tensors give us a generic way of discussing arrays with an arbitrary number of axes; for example, vectors are first-order tensors and matrices are second-order tensors.

Tensors are especially convenient for images, which are 3D data structures with axes corresponding to height, width, and the three (RGB) color channels.

>>> X = nd.arange(24).reshape((2, 3, 4))
>>> print('X.shape =', X.shape)
X.shape = (2, 3, 4)

>>> print('X =', X)
X =
[[[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]
  [ 8.  9. 10. 11.]]

 [[12. 13. 14. 15.]
  [16. 17. 18. 19.]
  [20. 21. 22. 23.]]]


Basic properties of tensor arithmetic :

For all tensors, multiplying by a scalar produces a tensor of the same shape; likewise, adding two tensors of the same shape produces a tensor of that same shape.

>>> a = 2
>>> x = nd.ones(3)
>>> y = nd.zeros(3)

>>> print(x.shape)
(3,)
>>> print(y.shape)
(3,)
>>> print((a * x).shape)
(3,)
>>> print((a * x + y).shape)
(3,)

Sums and means :

>>> print(x)
[1. 1. 1.]

>>> print(nd.sum(x))
[3.]

>>> print(A)
[[ 0.  1.  2.  3.]
 [ 4.  5.  6.  7.]
 [ 8.  9. 10. 11.]
 [12. 13. 14. 15.]
 [16. 17. 18. 19.]]

>>> print(nd.sum(A))
[190.]

Mean: the mean is the average of the elements.

Mean = sum / total number of elements.

>>> print(nd.mean(A))
[9.5]

>>> print(nd.sum(A) / A.size)
[9.5]
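
Both sum and mean can also be computed along a single axis (assuming the axis keyword is supported by your MXNet version); for example, summing and averaging the columns of A:

>>> print(nd.sum(A, axis=0))
[40. 45. 50. 55.]

>>> print(nd.mean(A, axis=0))
[ 8.  9. 10. 11.]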

Dot product :

>>> x = nd.arange(4)
>>> y = nd.ones(4)
>>> print(x, y, nd.dot(x, y))
[0. 1. 2. 3.]
[1. 1. 1. 1.]
[6.]

Note that nd.dot(x, y) is equivalent to nd.sum(x * y); both give the same result.

Dot products are useful in a wide range of contexts. For example, given a set of weights, the weighted sum of some values can be expressed as a dot product.
When the weights are non-negative and sum to one, the dot product expresses a weighted average.
When two vectors each have length one, the dot product captures the cosine of the angle between them.
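
As a small illustration of the weighted-average case (w and v below are just example vectors, with non-negative weights that sum to one):
>>> w = nd.array([0.1, 0.2, 0.3, 0.4])
>>> v = nd.array([10., 20., 30., 40.])
>>> print(nd.dot(w, v))
[30.]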

Matrix-vector product :

>>> nd.dot(A, x)
[14. 38. 62. 86. 110.]
Note that the column dimension of A (its number of columns) must be the same as the dimension (length) of x.

Matrix-matrix multiplication :

>>> B = nd.ones(shape=(4,3))
>>> nd.dot(A, B)
[[ 6.  6.  6.]
 [22. 22. 22.]
 [38. 38. 38.]
 [54. 54. 54.]
 [70. 70. 70.]]

Norms :

Norms are operators in linear algebra that tell us how big a vector or matrix is.
We write norms with the notation ||.||, where the . is just a placeholder; for example, the norm of a vector x is written ||x|| and the norm of a matrix A is written ||A||.

The L1 norm is simply the sum of the absolute values of the entries.
The L2 norm is the Euclidean length sqrt(x1**2 + x2**2 + ...), and it is what nd.norm computes for a vector:

>>> nd.norm(x)
[3.7416573]

To calculate the L1 norm, we simply take the absolute value of each element and then sum over the elements:

>>> nd.sum(nd.abs(x))
[6.]
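
nd.norm also works on matrices; by default it flattens the array and computes the L2 (Frobenius) norm, i.e. the square root of the sum of the squared entries (this default behaviour is an assumption about your MXNet version):

>>> nd.norm(A)   # sqrt(0**2 + 1**2 + ... + 19**2) = sqrt(2470), roughly 49.7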


Norms and objectives :

In machine learning we are often trying to solve optimization problems, for example:
(a). Maximize the probability assigned to observed data.
(b). Minimize the distance between predictions and the ground-truth observations.
(c). Assign vector representations to items (like words, products, or news articles) such that the distance between similar items is minimized and the distance between dissimilar items is maximized.
Oftentimes these objectives, perhaps the most important components of a machine learning algorithm, are expressed as norms.