Archives for quantization

12 Mar

Are Larger Models Better For Compression


When OpenAI released its GPT-2 model, its 1.5 billion parameters made it the biggest model of its time. It was soon eclipsed by NVIDIA’s Megatron, which had over 8 billion parameters. Last month, Microsoft released the world’s largest language model, Turing-NLG, which has 17 billion parameters. In terms of hardware, any model with more…

The post Are Larger Models Better For Compression appeared first on Analytics India Magazine.

20 Nov

8 Neural Network Compression Techniques For ML Developers

As neural networks grow to include more layers and nodes, reducing their storage and computational cost becomes critical, especially for real-time applications such as online learning and incremental learning. In addition, recent years have witnessed significant progress in virtual reality, augmented reality, and smart wearable devices, creating challenges in deploying deep learning systems to…
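The post's eight techniques aren't reproduced in this excerpt, but the quantization idea that titles this archive can be sketched in a few lines. The snippet below is a generic, illustrative example of linear 8-bit quantization of a weight tensor; the function names and the single-scale scheme are this sketch's own choices, not taken from the post.

```python
def quantize(values, num_bits=8):
    """Linearly quantize a list of floats to signed integers (illustrative sketch)."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax                      # one scale shared by the whole tensor
    q = [round(v / scale) for v in values]      # store these small integers instead of floats
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)
```

Storing 8-bit integers plus one scale factor in place of 32-bit floats cuts storage roughly 4x, at the cost of a small rounding error bounded by half the scale per value.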

The post 8 Neural Network Compression Techniques For ML Developers appeared first on Analytics India Magazine.

08 May

How To Build Safer Neural Networks In The Age of Smart Devices


This is a world more or less run on smart devices, and safety has become a key concern for leading tech enterprises. There is now a push for a complete transition to AI-enabled devices with smarter, more efficient features. To pack these palm-sized gadgets with the advantages ML brings, the neural networks are…

The post How To Build Safer Neural Networks In The Age of Smart Devices appeared first on Analytics India Magazine.