Author Archives: Pavan Kandru - Page 2
THiNC is a lightweight deep learning framework that makes model composition easy. Its many enticing advantages, such as shape inference, concise model representation, effortless debugging and an awesome config system, make it a recommendable choice of framework.
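As a hedged illustration of that composition style, here is a minimal sketch of chaining layers and letting Thinc infer shapes from sample data; the layer sizes and synthetic data are assumptions for the example.

```python
# Minimal sketch: compose a small classifier with Thinc's combinators.
# Layer widths and the random data are illustrative assumptions.
import numpy
from thinc.api import chain, Relu, Softmax, Adam

# chain() composes layers; missing input widths are filled in by shape
# inference when the model sees sample data in initialize().
model = chain(Relu(nO=64), Relu(nO=64), Softmax())

X = numpy.random.rand(32, 10).astype("float32")                      # 32 samples, 10 features
Y = numpy.eye(3, dtype="float32")[numpy.random.randint(0, 3, 32)]    # one-hot labels

model.initialize(X=X, Y=Y)            # infers the first layer's input size from X
optimizer = Adam(0.001)

Yh, backprop = model.begin_update(X)  # forward pass plus gradient callback
backprop(Yh - Y)                      # push a simple error signal back
model.finish_update(optimizer)        # apply the accumulated gradients
```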
The post Guide To THiNC: A Refreshing Functional Take On Deep Learning appeared first on Analytics India Magazine.
PyKEEN is a Python package that generates knowledge graph embeddings while abstracting away the training loop and evaluation. The embeddings obtained using PyKEEN are reproducible, and they convey precise semantics in the knowledge graph.
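For a concrete, hedged example, PyKEEN's pipeline() wraps model construction, the training loop and evaluation in a single call; the dataset, model and epoch count below are illustrative choices.

```python
# Hedged sketch of the PyKEEN pipeline on a small built-in dataset.
from pykeen.pipeline import pipeline

result = pipeline(
    dataset="Nations",                    # small built-in benchmark knowledge graph
    model="TransE",                       # translational embedding model
    training_kwargs=dict(num_epochs=5),   # tiny run, just for illustration
    random_seed=42,                       # fixed seed for reproducible embeddings
)

print(result.metric_results.to_df().head())   # evaluation metrics (e.g. MRR, hits@k)
result.save_to_directory("pykeen_nations_transe")  # persist model and embeddings
```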
The post Complete Guide to PyKeen: Python KnowlEdge EmbeddiNgs for Knowledge Graphs appeared first on Analytics India Magazine.
DETR (DEtection TRansformer) is an end-to-end object detection model that performs object classification and localization, i.e. bounding box prediction. It is a simple encoder-decoder Transformer with a novel loss function that allows us to formulate the complex object detection problem as a set prediction problem.
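As a hedged illustration, the pretrained DETR models published by Facebook Research can be loaded through torch.hub and run on an image; the image path and the 0.7 confidence threshold below are just demo assumptions.

```python
# Hedged sketch: run a pretrained DETR model for inference via torch.hub.
import torch
from PIL import Image
import torchvision.transforms as T

model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("sample.jpg")                   # any RGB image (hypothetical path)
outputs = model(transform(img).unsqueeze(0))     # a fixed-size set of predictions

# Each query yields class logits and a (cx, cy, w, h) box in [0, 1];
# keep only confident, non-background predictions.
probs = outputs["pred_logits"].softmax(-1)[0, :, :-1]
keep = probs.max(-1).values > 0.7
print(outputs["pred_boxes"][0, keep])
```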
The post How To Detect Objects With Detection Transformers? appeared first on Analytics India Magazine.
Multi-speaker text-to-speech (TTS) synthesis refers to a system with the ability to generate speech in different users’ voices. Collecting data and training a separate model for each user can be a hassle with traditional TTS approaches.
The post Guide to Real-time Voice Cloning: Neural Network System for Text-to-Speech Synthesis appeared first on Analytics India Magazine.
Quantization is the process of mapping high precision values (a large set of possible values) to low precision values (a smaller set of possible values). Quantization can be applied to both the weights and activations of a model.
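As a small worked example of the general idea (not Apple’s Quant API specifically), the sketch below maps float32 weights to 8-bit integers with a scale and zero-point and then back again to see the precision loss.

```python
# Worked example of affine quantization to unsigned 8-bit integers.
import numpy as np

weights = np.array([-1.2, -0.3, 0.0, 0.4, 0.9, 2.1], dtype=np.float32)

qmin, qmax = 0, 255                         # unsigned 8-bit range
scale = (weights.max() - weights.min()) / (qmax - qmin)
zero_point = round(qmin - weights.min() / scale)

quantized = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.uint8)
dequantized = (quantized.astype(np.float32) - zero_point) * scale

print(quantized)                            # the smaller set of integer values
print(np.abs(weights - dequantized).max())  # worst-case quantization error
```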
The post What is Apple’s Quant for Neural Networks Quantization appeared first on Analytics India Magazine.
SDNet is a contextualized attention-based deep neural network that achieved state-of-the-art results on the challenging task of conversational question answering. It makes use of inter-attention and self-attention along with recurrent bidirectional LSTM layers.
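The toy PyTorch sketch below illustrates just these two building blocks, a bidirectional LSTM encoder followed by scaled dot-product self-attention; it is not the SDNet implementation itself, and all sizes are illustrative assumptions.

```python
# Toy sketch: BiLSTM encoder + scaled dot-product self-attention.
import math
import torch
import torch.nn as nn

class BiLSTMSelfAttention(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, 2 * hidden_dim)

    def forward(self, x):                       # x: (batch, seq_len, input_dim)
        h, _ = self.bilstm(x)                   # (batch, seq_len, 2 * hidden_dim)
        q = self.proj(h)
        scores = q @ h.transpose(1, 2) / math.sqrt(h.size(-1))   # self-attention scores
        attn = scores.softmax(dim=-1)
        return attn @ h                         # attention-weighted context vectors

tokens = torch.randn(2, 20, 300)                # e.g. 300-d word embeddings
print(BiLSTMSelfAttention()(tokens).shape)      # torch.Size([2, 20, 256])
```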
The post Complete Guide to SDNet: Contextualized Attention-based Deep Network for Conversational Question-Answering appeared first on Analytics India Magazine.
What is Transformer XL?
Transformer XL is a Transformer model that allows us to model long-range dependencies without disrupting temporal coherence.
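A hedged example of this segment-level recurrence, using the transfo-xl-wt103 checkpoint shipped with older releases of the Hugging Face transformers library: the mems cached for one segment are fed into the next, so attention can reach beyond the current segment.

```python
# Hedged sketch of Transformer-XL's memory (segment-level recurrence).
from transformers import TransfoXLTokenizer, TransfoXLModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")

segment_1 = tokenizer("The quick brown fox", return_tensors="pt")
segment_2 = tokenizer("jumps over the lazy dog", return_tensors="pt")

out_1 = model(**segment_1)                     # first segment, no memory yet
out_2 = model(**segment_2, mems=out_1.mems)    # reuse cached hidden states as memory

print(len(out_2.mems), out_2.last_hidden_state.shape)
```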
The post What is Transformer XL? appeared first on Analytics India Magazine.
Medical Transformer relies on a gated position-sensitive axial attention mechanism that aims to work well on small datasets. It introduces Local-Global (LoGo), a novel training methodology for modelling image data efficiently.
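As a simplified, hedged sketch of the underlying idea, the snippet below implements plain axial attention (attention along the height axis, then the width axis); the gating and relative position terms of Medical Transformer’s actual mechanism are omitted, and the channel and feature-map sizes are assumptions.

```python
# Simplified sketch of plain (ungated) axial attention over a 2D feature map.
import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Attention along the width axis, one row at a time.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # Attention along the height axis, one column at a time.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)   # back to (B, C, H, W)

feat = torch.randn(1, 64, 32, 32)               # a feature map from a CNN stem
print(AxialAttention2d()(feat).shape)           # torch.Size([1, 64, 32, 32])
```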
The post Guide to Medical Transformer: Attention for Medical Image Segmentation appeared first on Analytics India Magazine.
BoTorch is a library built on top of PyTorch for Bayesian Optimization. It combines Monte-Carlo (MC) acquisition functions, a novel sample average approximation optimization approach, auto-differentiation, and variance reduction techniques.
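A minimal, hedged sketch of one Bayesian optimization step with recent BoTorch versions: fit a GP surrogate, build a Monte Carlo acquisition function and optimize it; the toy objective and unit-cube bounds are assumptions for the example.

```python
# Hedged sketch: one Bayesian-optimization step with BoTorch.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_x = torch.rand(10, 2, dtype=torch.double)              # 10 points in [0, 1]^2
train_y = -(train_x - 0.5).pow(2).sum(dim=-1, keepdim=True)  # toy objective to maximise

gp = SingleTaskGP(train_x, train_y)                          # GP surrogate model
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

acq = qExpectedImprovement(model=gp, best_f=train_y.max())   # Monte Carlo acquisition
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).double()

candidate, value = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
print(candidate)                                             # next point to evaluate
```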
The post Guide to Bayesian Optimization Using BoTorch appeared first on Analytics India Magazine.