Image Source: Photo by Jon Tyson on Unsplash

Understanding how historical data can lead to algorithmic bias, using a naive example of a compensation prediction model

To be human is to be biased?

“Bias” is a tendency or inclination to favor or disfavor one set over another. All humans carry some degree of bias because we are inherently programmed to perceive anyone different as a threat. Because of this implicit bias, we tend to unconsciously ascribe traits and qualities to…

Image Source: Photo by on Unsplash

A look into how we used Shopify, Google Analytics & Klaviyo data to increase the customer lifetime value for a direct-to-consumer e-commerce start-up.

Where is this going?

Start-ups are engaged in a juggling act: chasing high growth, which Reid Hoffman refers to as blitzscaling, or increasing profitability. A common way to achieve the former is to engage in price wars and acquire more customers. The latter, however, is a subtler goal, centered around the customer lifetime value or…

Photo by Tengyart on Unsplash

A look into a time-distributed, deep bimodal approach to predicting scores for the Big Five personality traits based on videos from the First Impression Challenge, run on Google Colab.

Videos are the New First Impressions!

Think about the approximate number of video calls you have been a part of since March 2020. Now, compare it to the number of video calls you were a part of before that. I am sure the difference is huge for most of us. …

A look into the need to balance overfitting and underfitting with data augmentation, using an application of image segmentation on satellite images to identify water bodies.

The Effect of Data Augmentation

When training neural networks, data augmentation is one of the most commonly used pre-processing techniques. The word “augmentation”, which literally means “the action or process of making or becoming greater in size or amount”, summarizes the outcome of this technique. But another important effect is that it increases or augments…
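As a rough illustration of the idea, simple label-preserving transforms such as flips and rotations can multiply the effective size of an image dataset without collecting new data. The sketch below uses NumPy; the `augment` helper and the toy batch are hypothetical, not taken from the article.

```python
import numpy as np

def augment(images):
    """Expand a batch of images with simple label-preserving transforms.

    Hypothetical helper for illustration: each input image yields the
    original, a horizontal flip, and a 90-degree rotation, tripling the
    effective dataset size.
    """
    flipped = images[:, :, ::-1, :]               # mirror left-right
    rotated = np.rot90(images, k=1, axes=(1, 2))  # rotate in the H, W plane
    return np.concatenate([images, flipped, rotated], axis=0)

# A toy batch of 4 square RGB "images" with layout (N, H, W, C)
batch = np.random.rand(4, 32, 32, 3)
augmented = augment(batch)
print(augmented.shape)  # (12, 32, 32, 3)
```

In practice, frameworks apply such transforms randomly on the fly during training rather than materializing the enlarged dataset, but the effect on what the network sees is the same.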

Comparing the denoising performance of autoencoders with residual connections across the bottleneck to those without, on a sample of RGB images from Flickr.


The official Keras blog calls autoencoders an example of ‘self-supervised’ algorithms, as their targets are generated from the input data. Hence, they are well suited to tasks such as image reconstruction.

The main parts of an autoencoder are the Encoder, the Bottleneck and the Decoder. The Encoder extracts image features at each step and…
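The Encoder/Bottleneck/Decoder structure can be sketched as a pair of maps: one that compresses the input into a low-dimensional code, and one that reconstructs from it. The toy below is a minimal NumPy sketch with random (untrained) weights, just to show the shapes involved; in a real model the weights are learned by minimizing reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: flatten 8x8 "images" (64 values), squeeze them
# through a 16-unit bottleneck, then reconstruct. Weights are random
# here; training would fit them to minimize reconstruction error.
W_enc = rng.standard_normal((64, 16)) * 0.1  # Encoder weights
W_dec = rng.standard_normal((16, 64)) * 0.1  # Decoder weights

def encode(x):
    return np.tanh(x @ W_enc)  # 64-d input -> 16-d bottleneck code

def decode(z):
    return z @ W_dec           # 16-d code -> 64-d reconstruction

x = rng.random((5, 64))        # batch of 5 flattened images
code = encode(x)
recon = decode(code)
print(code.shape, recon.shape)  # (5, 16) (5, 64)
```

The bottleneck's small dimensionality is what forces the network to learn a compressed representation; a residual connection across it, as compared in the article above, would add a direct path from input to reconstruction.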

Source: Photo by NCI on Unsplash

How Transfer Learning gives a head start with limited data and time

Note from the editors: Towards Data Science is a Medium publication primarily based on the study of data science and machine learning. We are not health professionals or epidemiologists, and the opinions in this article should not be interpreted as professional advice. To learn more about the coronavirus pandemic, you…

Metika Sikka

MSBA student at Columbia University
