Hands-On Guide To Weights and Biases (Wandb) | With Python Implementation
Everything in data science begins with the data at hand, and a large amount of time is usually spent on modeling: tracking the results and visualizing the data for every run. This whole process can be a tough grind. Training a model, especially a deep learning model, is tedious; the larger the model, the longer it takes to run. This hinders experimentation, as trying out different architectures and hyperparameters becomes aggravating when a single run takes hours or days to complete.
Many experiments that involve heavy training are never published, and researchers waste resources running the same experiments over and over. Fortunately, plenty of tools and platforms have been developed recently to track the real-time performance of models across different executions. One such tool is Weights and Biases (Wandb). Wandb organizes and analyzes your machine learning experiments, and it is lighter-weight than the TensorBoard toolkit. With a few lines of code, wandb saves your model's hyperparameters and output metrics and gives you interactive charts for training progress, model comparison, accuracy, and so on. It automatically tracks the state of your code, system metrics, and configuration parameters.
Wandb is open source and free for academic research. It supports all of the most common graphs and visualizations, and it also provides an API that lets users extract any information saved during a run. The platform offers a service called Benchmarks that allows people to share their implementations for a specific task. This helps newcomers to a task, since the toolkit records the approaches already tried and provides each implementation along with its performance scores. It also provides several tools for logging and organizing experiments, such as dashboards, reports, sweeps, and artifacts.
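As a quick illustration of the export API mentioned above, the snippet below pulls the saved configuration and metrics of a finished run; the run path "my-entity/my-project/run_id" is a placeholder, not a real run.
import wandb

api = wandb.Api()
run = api.run("my-entity/my-project/run_id")  # placeholder run path
print(run.config)         # hyperparameters saved for the run
print(run.summary)        # final metric values
history = run.history()   # per-step logged metrics as a pandas DataFrame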
Its Python API supports many popular frameworks and environments, such as Jupyter, PyTorch, Keras, and TensorFlow. It also runs on common infrastructure, whether your experiments execute on a local machine, a cloud instance, or a managed cluster.
The key features of this platform are:
- Store hyper-parameters used in a training run
- Search, compare, and visualize training runs
- Analyze system usage metrics alongside runs
- Collaborate with team members
- Replicate historic results
- Run parameter sweeps
- Keep records of experiments available forever
Let's get started with the implementation.
Requirements
!pip install wandb
- Log in to your wandb account
import wandb
!wandb login
It will ask you for an API key from your wandb profile; click on the link, copy the API key, and paste it here.
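If you prefer a non-interactive login (for example, on a remote machine), you can also place the API key in an environment variable; this is a minimal sketch, and the key below is just a placeholder.
import os
os.environ["WANDB_API_KEY"] = "your-api-key-here"  # placeholder, not a real key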
Model Training
Let's take an example of image classification. You can take any dataset you want; in this session, we will use a simple Multilayer Perceptron (MLP) model to classify the images of the MNIST dataset, using the PyTorch framework along with wandb. Since our main focus is Wandb, the explanation below covers only the wandb-specific code; you can check out the MLP model code here. Just before training the model, we have to integrate the training loop with wandb.
Just before the training, call wandb.init(). It starts a process that syncs metrics in real time. In this function, you can give the run a name and a project name, and attach notes/tags, if any. Then, after creating an object of the model, we pass that model to the watch function so that wandb can log the network. This is done with wandb.watch(model).
Next, we train and validate the model as usual. After computing these four values: the training loss (loss_train), training accuracy (acc_train), validation loss (loss_valid), and validation accuracy (acc_valid), we pass them to wandb.log() to log a dictionary of metrics at each step.
# imports assumed by this snippet (the MLP class and the train/validate
# helper functions come from the model code linked above)
import time
import torch
import torch.nn as nn
import torch.optim as optim
import wandb

# selecting the device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

# Initialize WandB
wandb.init(name='MLP method',
           project='introductory_project',
           notes='This is an introductory project',
           tags=['MNIST dataset', 'Test Run'])

# specify loss function (categorical cross-entropy)
error = nn.CrossEntropyLoss()

# initializing the model
model = MLP().to(device)

# specify optimizer
optimizer = optim.Adam(model.parameters())

# Log the network weight histograms (optional)
wandb.watch(model)

num_epochs = 10
start_time = time.time()
for epoch in range(1, num_epochs+1):
    # metrics are computed by the helper functions train() and validate()
    loss_train, acc_train = train(model, error, optimizer, train_loader)
    loss_valid, acc_valid = validate(model, error, valid_loader)
    print(f'\tTrain Loss: {loss_train:.3f} | Train Acc: {acc_train*100:.2f}%')
    print(f'\t Val. Loss: {loss_valid:.3f} | Val. Acc: {acc_valid*100:.2f}%')
    # Log the loss and accuracy values at the end of each epoch
    wandb.log({
        "Epoch": epoch,
        "Train Loss": loss_train,
        "Train Acc": acc_train,
        "Valid Loss": loss_valid,
        "Valid Acc": acc_valid})

print("Time Elapsed : {:.4f}s".format(time.time() - start_time))
Once training completes, the output provides a link to the wandb interface, where all the saved metrics have been converted into interactive graphs. The graphs below are the model logs; you can check the interactive version of these graphs here. The model logs show how the metrics of our model changed from epoch to epoch.
Similarly, it also provides system logs with information like GPU consumption, CPU utilization, etc. An example is shown below. You can check out the interactive version here.
Apart from that, it also saves the model information of our neural network. You can check the demo here.
A very useful section is Logs, which shows all the shell output produced during training. It is incredibly useful for checking warnings and errors even when we no longer have access to the terminal.
And finally, the Files section contains all the saved model files.
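The snippet below shows one way to get a trained model into that section; it is a minimal sketch that assumes the training code above, saving the weights into the run directory from which wandb uploads files automatically.
import os
import torch

# files written to wandb.run.dir end up in the run's Files section
torch.save(model.state_dict(), os.path.join(wandb.run.dir, 'model.pt'))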
Now, if we replace the MLP model with a simple Logistic Regression model and fit it to the MNIST dataset again, wandb gives comparison charts (grouped charts) of both models applied to the same dataset, as sketched below. The link for the code snippet is here; the interactive version is available here.
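A minimal sketch of how the second run might be set up (the run name is illustrative; the actual snippet is linked above): a logistic regression on flattened MNIST images is just a single linear layer, logged as a separate run in the same project.
import torch.nn as nn
import wandb

wandb.init(name='Logistic Regression method',  # illustrative run name
           project='introductory_project',
           tags=['MNIST dataset', 'Test Run'])

# logistic regression on flattened 28x28 images = one linear layer
model = nn.Sequential(nn.Flatten(start_dim=1), nn.Linear(784, 10)).to(device)
wandb.watch(model)
# ...then train and call wandb.log() exactly as before; both runs
# appear side by side in the project's grouped charts.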
Hyperparameter search through Wandb
Searching for the right hyperparameters in a high-dimensional space can be tricky. Hyperparameter Sweeps provide an efficient way to do this with just a few lines of code, by automatically searching through combinations of hyperparameter values (e.g. learning rate, batch size, number of hidden layers, optimizer type) to find the best-performing ones. In this section, we will walk through a hyperparameter sweep tutorial.
For this example, we will again take the MLP model and the MNIST dataset (discussed above). For the hyperparameter sweep, we define a dictionary called sweep_config containing all the hyperparameters (learning_rate, batch_size, etc.) for the given model.
# define a sweep dictionary containing all the hyperparameters
sweep_config = {
    'method': 'random',  # grid, random
    'metric': {
        'name': 'loss',
        'goal': 'minimize'
    },
    'parameters': {
        'epochs': {
            'values': [2, 5, 10, 15]
        },
        'batch_size': {
            'values': [256, 128, 64, 32]
        },
        'learning_rate': {
            'values': [1e-2, 1e-3, 1e-4, 3e-4, 3e-5, 1e-5]
        },
        'fc_layer_size': {
            'values': [128, 256, 512]
        },
        'optimizer': {
            'values': ['adam', 'sgd']
        },
    }
}
Now, initialize the sweep.
#Now initialize the sweep
sweep_id = wandb.sweep(sweep_config, project="sweep_introduction")
Now, create a function build_dataset() that takes batch_size as its parameter. This function downloads the MNIST data, transforms the images into tensors, and divides them into batches of the required size; the code snippet is available here. Next, we create another function train() which fits the model to the data for each combination of hyperparameters and reports the configuration to wandb. For that, we call wandb.init() and pass the default configuration. The rest of the training procedure is the same as described earlier.
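Since only a link is given for build_dataset(), here is a minimal sketch of what it might look like, assuming torchvision is available (the linked snippet may differ in details such as normalization):
import torch
from torchvision import datasets, transforms

def build_dataset(batch_size):
    # convert images to tensors and normalize with the usual MNIST statistics
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    train_set = datasets.MNIST('.', train=True, download=True, transform=transform)
    return torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True)
And here is the train() function itself: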
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import wandb

def train():
    # Default values for hyper-parameters we're going to sweep over
    config_defaults = {
        'epochs': 5,
        'batch_size': 128,
        'learning_rate': 1e-3,
        'optimizer': 'adam',
        'fc_layer_size': 128,
        'dropout': 0.5,  # defined as a default but unused in this simple network
    }

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Initialize a new wandb run
    wandb.init(config=config_defaults)

    # Config is a variable that holds and saves hyperparameters and inputs
    config = wandb.config

    # Define the model architecture; the final LogSoftmax is needed because
    # F.nll_loss below expects log-probabilities, not raw logits
    network = nn.Sequential(
        nn.Flatten(start_dim=1),
        nn.Linear(784, config.fc_layer_size),
        nn.ReLU(),
        nn.Linear(config.fc_layer_size, config.fc_layer_size),
        nn.ReLU(),
        nn.Linear(config.fc_layer_size, 10),
        nn.LogSoftmax(dim=1)
    )

    # building dataset with the given batch_size
    train_loader = build_dataset(config.batch_size)

    # Define the optimizer
    if config.optimizer == 'sgd':
        optimizer = optim.SGD(network.parameters(), lr=config.learning_rate, momentum=0.9)
    elif config.optimizer == 'adam':
        optimizer = optim.Adam(network.parameters(), lr=config.learning_rate)

    network = network.to(device)
    network.train()

    for i in range(config.epochs):
        closs = 0
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = network(data)
            loss = F.nll_loss(output, target)
            loss.backward()
            closs = closs + loss.item()
            optimizer.step()
            wandb.log({"batch loss": loss.item()})
        # average the cumulative loss over the number of batches in the epoch
        wandb.log({"loss": closs / len(train_loader)})
Now, just run the sweep agent. This process may take a while, as wandb will try out the different hyperparameter combinations.
#run the sweep agent
wandb.agent(sweep_id, train)
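Note that with the random search method, the agent will keep launching runs indefinitely; if you want to cap the number of runs, wandb.agent accepts a count argument:
wandb.agent(sweep_id, train, count=10)  # stop after 10 runs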
At this link, you can check all the interactive graphs created by wandb. Some of them are discussed below.
Parallel Coordinates Plot: This plot maps hyperparameter values to model metrics. It gives us a quick way to see which combinations of hyperparameters produced the best model performance.
Hyperparameter Importance Plot: This plot surfaces which hyperparameters were the best predictors of, and most highly correlated with, desirable values of your metrics.
Conclusion
In this session, we have covered the basics of Weights and Biases (Wandb). We have seen a Python implementation of using the wandb API in our existing code. We have also discussed grouped charts (different models applied to the same dataset). Lastly, we have seen hyperparameter search through sweeps. Some of the advanced topics that wandb covers are:
- Environment variables: Set API keys in environment variables so you can run training on a managed cluster.
- Offline mode: Use dryrun mode to train offline and sync results later.
- On-prem: Install W&B in a private cloud or air-gapped servers in your own infrastructure. We have local installations for everyone from academics to enterprise teams.
- Artifacts: Track and version models and datasets in a streamlined way that automatically picks up your pipeline steps as you train models (a minimal sketch follows this list).
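As a taste of Artifacts, here is a minimal sketch of versioning a trained model file inside a run; the artifact name 'mnist-model' is illustrative.
import wandb

run = wandb.init(project='introductory_project')
artifact = wandb.Artifact('mnist-model', type='model')  # illustrative name
artifact.add_file('model.pt')  # a model file saved locally, e.g. with torch.save
run.log_artifact(artifact)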
These visualizations can help you save both time and resources, and are therefore worth exploring further.
Resources and Tutorials used above:
- Website
- Github
- Documentation
- Official Tutorial-1
- Official Tutorial-2
- W&B Basics Colab Notebook
- W&B Sweep Colab Notebook