TensorFlow library in Python

TensorFlow is a machine learning library for building and training neural network models. A notable feature of TensorFlow is that it represents data as tensors and can run computations efficiently on specialized hardware such as the Tensor Processing Unit (TPU). Other libraries, such as PyTorch, PyBrain, and Keras, also provide functionality to build neural networks from scratch. TensorFlow provides so many functions that it is not easy to cover them all, so we walk through the most important ones below.

Functions of TensorFlow

The library provides functions for everything from creating tensors to evaluating a model's performance. The main functions of TensorFlow are illustrated in the flowchart below.

[Flowchart: Functions of TensorFlow]

Creating tensors

The tf.constant() function creates tensors with fixed values, which can be used as input data for various deep learning operations in TensorFlow. In the code below, we create tensors of different dimensions: a scalar, a vector, and a matrix.

import tensorflow as tf
tensor1 = tf.constant(5)                    # 0-D tensor (scalar)
print(tensor1)
tensor2 = tf.constant([76, 128, 256])       # 1-D tensor (vector)
print(tensor2)
tensor3 = tf.constant([[7, 11], [19, 31]])  # 2-D tensor (matrix)
print(tensor3)

Variables

Unlike constants created using tf.constant(), variables can be updated and trained during model optimization. In the code below, we create a variable using tf.Variable(). The tf.random.normal() function generates normally distributed random values to initialize the weights variable.

import tensorflow as tf
# Initialize a 10x10 weight matrix with normally distributed random values
weights = tf.Variable(tf.random.normal(shape=(10, 10)))
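Because variables are mutable, we can update them in place. The snippet below is a minimal sketch of such an update: it subtracts a small delta from the weights with assign_sub(), which is the same kind of in-place update an optimizer applies during training (the 2x2 shape is chosen purely for readability).

import tensorflow as tf
weights = tf.Variable(tf.random.normal(shape=(2, 2)))
# Subtract 0.1 from every entry in place, mimicking a training update
weights.assign_sub(0.1 * tf.ones(shape=(2, 2)))
print(weights)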

Operations

The code below initializes two constant tensors, a and b, with the values 12 and 144, respectively. It then performs element-wise addition and multiplication on these tensors using tf.add() and tf.multiply(), storing the results in add_operation and multiply_operation.

import tensorflow as tf
a = tf.constant(12)
b = tf.constant(144)
add_operation = tf.add(a, b)
multiply_operation = tf.multiply(a, b)
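These operations are element-wise, so they also apply entry by entry to higher-dimensional tensors. A quick sketch with two vectors, whose values are assumed for illustration:

import tensorflow as tf
x = tf.constant([1, 2, 3])
y = tf.constant([10, 20, 30])
print(tf.add(x, y).numpy())       # [11 22 33]
print(tf.multiply(x, y).numpy())  # [10 40 90]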

Data pipelines

We can set up a data pipeline with the help of TensorFlow. The tf.data.Dataset.from_tensor_slices() function slices the input_data and labels tensors along their first dimension, pairing each sample with its corresponding label. Moreover, we can shuffle the data with the dataset's shuffle() method and group it into batches with batch(). Have a look at the syntax of calling these functions, where input_data and labels are assumed to be tensors of samples and their labels.

import tensorflow as tf
training_data = tf.data.Dataset.from_tensor_slices((input_data, labels))
training_data = training_data.shuffle(buffer_size=1024).batch(batch_size=64)
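To make this concrete, here is a self-contained sketch that assumes some toy input_data and labels tensors and then iterates over the shuffled batches:

import tensorflow as tf

# Toy data, assumed purely for illustration:
# 100 samples with 5 features each, and one integer label per sample
input_data = tf.random.normal(shape=(100, 5))
labels = tf.random.uniform(shape=(100,), maxval=10, dtype=tf.int32)

training_data = tf.data.Dataset.from_tensor_slices((input_data, labels))
training_data = training_data.shuffle(buffer_size=1024).batch(batch_size=64)

# Each iteration yields one shuffled batch of (features, labels)
for batch_features, batch_labels in training_data:
    print(batch_features.shape, batch_labels.shape)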

Neural network layers

We can define neural network layers using TensorFlow. tf.keras.layers.Dense() takes two main arguments: the number of neurons in the layer and the activation function. In the code below, we define a layer with 32 neurons and the relu activation.

import tensorflow as tf
layer = tf.keras.layers.Dense(units=32, activation='relu')
print(layer)
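A Dense layer creates its weights the first time it sees input, so calling it on a batch of data reveals the output shape. A minimal sketch, with the input shape chosen arbitrarily:

import tensorflow as tf
layer = tf.keras.layers.Dense(units=32, activation='relu')
# A batch of 4 samples with 8 features each
inputs = tf.random.normal(shape=(4, 8))
outputs = layer(inputs)
print(outputs.shape)  # (4, 32): each sample is mapped to 32 units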

Optimizers

We use optimizers to adjust the model's parameters during training in order to minimize the error. The optimizer searches for the set of parameters that best fits the training data and helps the model converge to a good solution. There are various optimizers, such as stochastic gradient descent (SGD), AdaGrad, Adam, and Nadam.

In the code below, we use stochastic gradient descent with a learning rate of 0.01.

import tensorflow as tf
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
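To see how an optimizer adjusts parameters, the sketch below runs a single gradient-descent step on a toy variable; the loss function and starting value are assumptions made for illustration.

import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
w = tf.Variable(5.0)

# Minimize the toy loss (w - 3)^2, whose minimum is at w = 3
with tf.GradientTape() as tape:
    loss = (w - 3.0) ** 2
grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
print(w.numpy())  # 4.96: w has moved from 5.0 toward 3.0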

Loss functions

Sparse categorical cross-entropy is used for multi-class classification problems where the target labels are provided as integer class indices rather than one-hot vectors. The from_logits=True argument indicates that the input to the loss function will be unnormalized model outputs (logits) rather than probability distributions. Have a look at the code to use this loss function in TensorFlow.

import tensorflow as tf
loss_function = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
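As a quick sketch, we can compute the loss for a small batch of integer labels and raw logits; the values below are assumed purely for illustration:

import tensorflow as tf

loss_function = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Integer class labels and unnormalized logits for 2 samples over 3 classes
y_true = tf.constant([0, 2])
logits = tf.constant([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
print(loss_function(y_true, logits).numpy())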

Metrics

TensorFlow also lets us evaluate models with metrics such as SparseCategoricalAccuracy. This metric is commonly used for multi-class classification tasks where the target labels are integers representing class indices.

import tensorflow as tf
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy()
print(accuracy_metric)
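The metric accumulates results across batches through update_state() and reports the running value through result(). A minimal sketch with assumed labels and predictions:

import tensorflow as tf

accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy()
# Integer labels and per-class probabilities for 2 samples (assumed values)
y_true = tf.constant([1, 2])
y_pred = tf.constant([[0.1, 0.8, 0.1], [0.3, 0.6, 0.1]])
accuracy_metric.update_state(y_true, y_pred)
print(accuracy_metric.result().numpy())  # 0.5: one of the two predictions is correct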

Model creation

After discussing the functions provided by TensorFlow, we can now combine them to create a complete model. Have a look at the code below.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
# Load and preprocess the MNIST dataset
(train_X, train_Y), (test_X, test_Y) = mnist.load_data()
test_Y = to_categorical(test_Y, 10)
train_Y = to_categorical(train_Y, 10)
test_X = test_X.astype('float32') / 255.0
train_X = train_X.astype('float32') / 255.0
# Build the neural network
model = Sequential([
    Flatten(input_shape=(28, 28)),   # Flatten the 28x28 image to a 1D array
    Dense(128, activation='relu'),   # Fully connected layer with 128 units and ReLU activation
    Dense(64, activation='relu'),    # Fully connected layer with 64 units and ReLU activation
    Dense(10, activation='softmax')  # Output layer with 10 units and softmax activation
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(train_X, train_Y, epochs=5, batch_size=64, validation_split=0.3)
# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(test_X, test_Y)
print(f"Model accuracy: {test_acc}")

The explanation of the code is given below.

  • Imports: We import the necessary TensorFlow modules, including the Sequential model, the Dense and Flatten layers, the MNIST dataset, and the to_categorical utility for label preprocessing.

  • Data preprocessing: We load the MNIST dataset, one-hot encode the labels with to_categorical, and scale the pixel values to the range [0, 1].

  • Model and compilation: The network flattens each 28x28 image, passes it through two hidden ReLU layers, and produces class probabilities through a softmax output layer. The model is compiled with the Adam optimizer and the categorical cross-entropy loss.

  • Training and evaluation: We train the model for 5 epochs with a batch size of 64, holding out 30% of the training data for validation, and then evaluate its accuracy on the test set.
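Once trained, the model can be used for inference. Continuing from the code above, a quick sketch with model.predict() on a few test images:

import numpy as np

# Predict class probabilities for the first five test images
probabilities = model.predict(test_X[:5])
predicted_digits = np.argmax(probabilities, axis=1)
print(predicted_digits)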

Conclusion

TensorFlow is a powerful machine learning library. As we have discussed, it supports auto-differentiation, makes it easy to build neural networks, and provides functionality for techniques such as transfer learning. Because the library offers these high-level building blocks, we don't need to build models from scratch.
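As a small illustration of the auto-differentiation mentioned above, tf.GradientTape records operations and computes exact gradients automatically. In this sketch, we assume the simple function y = x ** 2 so the result is easy to verify by hand:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
# dy/dx = 2x, so the gradient at x = 3 is 6.0
print(tape.gradient(y, x).numpy())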

Q: What is the purpose of auto differentiation in TensorFlow?

A) To create automatic data pipelines
B) To automatically adjust learning rate during training
C) To automatically compute gradients for optimization
D) To automatically visualize model architectures
