Archives for Distributed learning

03 Aug

Hands-On Guide To Custom Training With Tensorflow Strategy

Distributed training in TensorFlow is built around data parallelism: the same model architecture is replicated on multiple devices, and each replica trains on a different slice of the input data. A device here is a CPU + GPU unit, or a separate GPU or TPU. The input data is split into as many equal slices as there are available devices, and each slice is fed to its own copy of the model, as in the sketch below.
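A minimal sketch of this custom-training pattern with tf.distribute.MirroredStrategy; the toy model, random data, and batch size here are illustrative assumptions, not the article's own code.

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU (or the CPU
# if none are found) and keeps the replicas in sync with all-reduce.
strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync

# Toy dataset; each replica receives an equal slice of every global batch.
dataset = tf.data.Dataset.from_tensor_slices((
    tf.random.normal([1024, 10]),
    tf.random.uniform([1024], maxval=2, dtype=tf.int32),
)).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

# Variables (model and optimizer) must be created under the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    optimizer = tf.keras.optimizers.SGD()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, logits):
    per_example_loss = loss_fn(labels, logits)
    # Scale by the global batch size so gradients sum correctly across replicas.
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = compute_loss(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    # Run one step on every replica, then sum the per-replica losses.
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for epoch in range(2):
    for batch in dist_dataset:
        distributed_train_step(batch)
```

Note that the per-example loss is scaled by the global batch size rather than the per-replica batch size; this is what keeps the summed gradients equivalent to single-device training.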

The post Hands-On Guide To Custom Training With Tensorflow Strategy appeared first on Analytics India Magazine.

19 Dec

Meet MACH, A Distributed Learning Breakthrough For Extreme Classification Problems

There is a new approach to tackling the problem of training computers for ‘extreme classification problems’, such as answering general questions — the Merged-Average Classifiers via Hashing (MACH) approach. This divide-and-conquer approach to machine learning can cut the time and computational resources required to deal with extreme classification problems. In a large dataset, like…
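A minimal sketch of the merged-average idea, assuming toy random data, scikit-learn classifiers, and illustrative values for the class count K, bucket count B, and repetition count R; the real MACH work uses far larger label spaces and its own models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

K, B, R = 1000, 32, 4          # original classes, buckets per classifier, repetitions
rng = np.random.default_rng(0)

# R independent random hash maps from class id -> bucket id, so each small
# classifier only has to separate B buckets instead of K classes.
hash_maps = [rng.integers(0, B, size=K) for _ in range(R)]

X = rng.normal(size=(5000, 20))
y = rng.integers(0, K, size=5000)

# Train one small B-way classifier per hash map on the bucketed labels.
models = [LogisticRegression(max_iter=200).fit(X, h[y]) for h in hash_maps]

def predict(x):
    """Merge-average: score each original class by the mean probability of
    its bucket across the R classifiers, then take the argmax."""
    x = x.reshape(1, -1)
    scores = np.zeros(K)
    for model, h in zip(models, hash_maps):
        probs = model.predict_proba(x)[0]
        bucket_prob = np.zeros(B)
        bucket_prob[model.classes_] = probs   # align with the buckets seen in training
        scores += bucket_prob[h]              # probability of each class's bucket
    return int(np.argmax(scores / R))
```

The divide-and-conquer payoff is that the R small classifiers can be trained independently, and even in parallel, while the merged average recovers a score over all K original classes at prediction time.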

The post Meet MACH, A Distributed Learning Breakthrough For Extreme Classification Problems appeared first on Analytics India Magazine.