Sparse Data Alternatives With Neural Network Embeddings


Last week, we had the opportunity to participate in the Data by the Bay conference, the first "data grid" conference: a matrix of six vertical application areas spanned by multiple horizontal data pipelines, platforms, and algorithms. Data by the Bay unifies data science and data engineering to examine what really works to run businesses at scale.

At the conference, I presented "Sparse Data Alternatives With Neural Network Embeddings" with co-presenters and Skymind contributors David Ott and Marvin Bertin, who, along with Michael Ulin, have been developing the "Like2Vec" algorithm over the past six months.

The advent of continuous neural word representation technologies such as Word2Vec has transformed how data scientists and machine learning experts work with natural language data. One reason these algorithms are so successful is that they offer an efficient, information-preserving way to compress native features (e.g., word frequencies) down to the dimensions of the embedded vector space. This compression is particularly effective in the sparse-data setting of word count frequencies, where each word is natively a vocabulary-sized vector that is almost entirely zeros.
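As a minimal sketch of that compression (not the code from the talk), the snippet below uses the gensim library's Word2Vec implementation on a toy corpus; the corpus, dimensions, and parameter choices here are illustrative assumptions.

```python
# Sketch: compressing sparse, vocabulary-sized word representations into
# small dense embeddings with gensim's Word2Vec (gensim 4.x API assumed).
from gensim.models import Word2Vec

# Toy tokenized corpus, purely for illustration.
sentences = [
    ["users", "rate", "movies"],
    ["users", "rate", "books"],
    ["movies", "and", "books", "are", "items"],
]

# Natively, each word is a one-hot vector of size len(vocabulary);
# the model compresses it to a dense 16-dimensional embedding.
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1)

print(model.wv["movies"].shape)         # (16,) -- dense and low-dimensional
print(model.wv.most_similar("movies"))  # neighbors in the embedded space
```

The point is the shape change: a sparse count vector with as many dimensions as the vocabulary becomes a dense 16-dimensional vector in which similar words land near one another.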

Recently, word embedding algorithms have been generalized to graph-structured data. In the talk, we presented initial performance results for what we call the Like2Vec algorithm, which applies this generalization to other sparse-data settings, such as user-based and item-based recommenders.
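To make the generalization concrete, here is a hedged sketch of the general recipe (in the style of DeepWalk/node2vec, not the Like2Vec implementation itself): random walks over an item graph are treated as "sentences," so the same Word2Vec machinery that embeds words can embed items. The graph, walk lengths, and item names below are hypothetical.

```python
# Sketch: embedding graph nodes by feeding random walks to Word2Vec.
# This illustrates the word-embedding-to-graph generalization generally,
# not the specific Like2Vec algorithm presented in the talk.
import random
from gensim.models import Word2Vec

# Toy item-item graph, e.g. an edge for items liked by the same user.
graph = {
    "item_a": ["item_b", "item_c"],
    "item_b": ["item_a", "item_c"],
    "item_c": ["item_a", "item_b", "item_d"],
    "item_d": ["item_c"],
}

def random_walk(start, length=10):
    """Uniform random walk over the graph, returned as a 'sentence' of nodes."""
    walk = [start]
    while len(walk) < length:
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# Generate a corpus of walks, then embed nodes exactly as if they were words.
walks = [random_walk(node) for node in graph for _ in range(20)]
model = Word2Vec(walks, vector_size=8, window=3, min_count=1, sg=1)

# Nearest neighbors in embedding space can then drive item-based recommendations.
print(model.wv.most_similar("item_c"))
```

Because the walk corpus reflects graph structure rather than text, the resulting embeddings place frequently co-visited items near one another, which is exactly the signal a user-based or item-based recommender needs from otherwise sparse interaction data.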