TensorFlow 101

The Anatomy of Deep Learning

Anna Alexandra Grigoryan
3 min read · Jul 1, 2019

What is TensorFlow?

TensorFlow is an open-source library developed by the Google Brain team, originally created for tasks that require heavy numerical computation. This is why it is very useful for machine learning applications. It has a C++ backend, which lets it run faster than pure Python code.

A TensorFlow application uses a structure known as a data flow graph. A data flow graph is a graph model for computer programs that expresses opportunities for concurrent execution of program parts: assignments and references to variables are represented by the nodes, and the flow of information between them is represented by the arcs.

Advantages

  • Provides both a Python and a C++ API. (The Python API is more complete and generally easier to use.)
  • Fast compilation compared to alternative deep learning libraries.
  • Supports CPUs, GPUs, and even distributed processing in a cluster. (A neural network can be trained using a CPU and multiple GPUs, making models efficient on large-scale systems.)

Structure

A data flow graph has two basic units: nodes and edges.

Nodes represent mathematical operations, and edges represent multi-dimensional arrays. A tensor is a multidimensional array. It can be zero-dimensional, i.e. a scalar value; one-dimensional, i.e. a vector; two-dimensional, i.e. a matrix; and so on. This is very helpful when dealing with images, due to the information encoded in them, e.g. height, width, color channels, etc.
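The rank idea above can be sketched in plain Python (no TensorFlow needed): a tensor is just a nested multi-dimensional array, and its shape tells you the rank. The `shape` helper below is an illustration written for this post, not a TensorFlow function.

```python
def shape(tensor):
    """Return the shape of a nested-list tensor as a tuple."""
    if not isinstance(tensor, list):
        return ()                         # rank 0: a scalar
    return (len(tensor),) + shape(tensor[0])

scalar = 3.0                              # rank 0
vector = [1.0, 2.0, 3.0]                  # rank 1
matrix = [[1.0, 2.0], [3.0, 4.0]]         # rank 2
image  = [[[0, 0, 0]] * 4] * 3            # rank 3: 3x4 pixels, 3 color channels

print(shape(scalar))  # ()
print(shape(vector))  # (3,)
print(shape(matrix))  # (2, 2)
print(shape(image))   # (3, 4, 3)
```

The rank-3 example is exactly how an RGB image is stored: height × width × channels.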

The matrix of features is a term used in machine learning to describe the columns that contain the independent variables to be processed, across all rows in the dataset. Each row in the dataset is called an observation.
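For concreteness, here is a tiny feature matrix with made-up numbers (the data and column meanings are hypothetical, chosen only to show the layout):

```python
# Each row is one observation; each column is one independent variable
# (here: height in cm and weight in kg, purely illustrative values).
feature_matrix = [
    [170.0, 65.0],   # observation 1
    [182.0, 80.0],   # observation 2
    [158.0, 52.0],   # observation 3
]

n_observations = len(feature_matrix)     # number of rows
n_features = len(feature_matrix[0])      # number of columns
print(n_observations, n_features)        # 3 2
```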

The feature matrix is fed into the graph through a placeholder. A placeholder is simply a variable to which we assign data at a later time. Placeholders can be seen as “holes” through which you can pass data from outside the graph: they let us define our operations in the graph without needing the data yet. When we want to execute the graph, we have to feed the placeholders with our input data, which is why placeholders must be fed before the graph is run.

TensorFlow variables, by contrast, are used to share and persist values. When you define a placeholder or variable, TensorFlow adds an operation to your graph. For example, a MatMul operation multiplies the weight matrix by the feature matrix, and then an Add operation adds the bias term. The output of each operation is a tensor, which flows into the next operation, and so on until the end of the graph, where the desired result is produced. After adding all these operations to the graph, we can create a session to run the graph and perform the computations.
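The deferred-execution pattern described above can be sketched in plain Python. This is a toy mimic of the TensorFlow 1.x style, not the TF API: each node is a function of a feed dict, and nothing is computed until `run` is called, just as nothing runs until `session.run` in TensorFlow.

```python
# Toy dataflow graph: nodes are closures over a feed dict.
def placeholder(name):
    return lambda feed: feed[name]          # a "hole" filled at run time

def constant(value):
    return lambda feed: value

def matmul(a, b):                           # plays the role of MatMul
    return lambda feed: [[sum(x * y for x, y in zip(row, col))
                          for col in zip(*b(feed))]
                         for row in a(feed)]

def add(a, b):                              # plays the role of Add
    return lambda feed: [[x + y for x, y in zip(ra, rb)]
                         for ra, rb in zip(a(feed), b(feed))]

def run(node, feed):                        # plays the role of session.run
    return node(feed)

# Build the graph: output = features @ weights + bias
features = placeholder("X")                 # fed later, like a tf.placeholder
weights  = constant([[2.0], [1.0]])         # the "weight matrix"
bias     = constant([[0.5]])                # the bias term
output   = add(matmul(features, weights), bias)

# Only now is data fed in and the graph actually executed.
print(run(output, {"X": [[1.0, 3.0]]}))     # [[5.5]]
```

Defining the graph first and feeding data later is exactly why placeholders exist: the structure of the computation is fixed before any data arrives.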

Why is TensorFlow good for Deep learning?

  • Built-in support for deep learning and neural networks (it’s easy to assemble a net, assign parameters, and run the training process).
  • A collection of simple, trainable mathematical functions that are useful for neural networks.
  • Deep learning, as a gradient-based machine learning approach, benefits from TensorFlow’s automatic differentiation and built-in optimizers.
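To give a feel for what automatic differentiation means, here is a minimal sketch in plain Python (this is the forward-mode dual-number trick, not TensorFlow's actual reverse-mode implementation): the value and the derivative are computed together in one pass.

```python
class Dual:
    """A number carrying its own derivative: (value, d value / d x)."""
    def __init__(self, value, deriv):
        self.value, self.deriv = value, deriv

    def __add__(self, other):               # sum rule
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):               # product rule
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

def f(x):                                   # f(x) = x^2 + x
    return x * x + x

x = Dual(3.0, 1.0)                          # seed derivative: dx/dx = 1
y = f(x)
print(y.value, y.deriv)                     # 12.0 7.0  (f(3) = 12, f'(3) = 7)
```

No symbolic algebra and no finite differences are involved; the derivative falls out of the same arithmetic that computes the value, which is the property gradient-based training relies on.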
