Measures of Semantic Similarity

Semantic similarity can be understood as the answer to the question "how closely is word A related to word B?" Computing semantic similarity comes up often in Natural Language Processing applications. In this blog, I will walk through some well-known algorithms and their key characteristics.

Path Length

Path Length is a score based on the number of edges in the shortest path between two words/senses: the shorter the path between them in a thesaurus hierarchy graph, the more similar they are. A thesaurus hierarchy graph is a tree that runs from broader categories of words down to narrower ones. For example, dime and nickel can be two child nodes of coin, and man and woman can be two child nodes of human. It is a simple edge-counting scheme:

Simpath(c1, c2) = number of edges in the shortest path between c1 and c2
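To make this concrete, here is a minimal sketch using NLTK's WordNet interface (my choice of tool here; any thesaurus graph with a shortest-path routine would do). The dog/cat sense pair is just an illustrative example:

import nltk
nltk.download('wordnet')  # one-time download of the WordNet data

from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# Raw edge count on the shortest path between the two senses
print(dog.shortest_path_distance(cat))  # e.g. 4

# NLTK's normalized variant: 1 / (shortest path length + 1)
print(dog.path_similarity(cat))  # e.g. 0.2

Note that NLTK also offers an inverted form of the raw edge count, so that larger scores mean more similar.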

Key Characteristics

  • It is very simple
  • It is a path-based measure
  • The score provided is discrete and not normalized
  • This requires tagged data and is hugely dependent on the graph quality
  • It assumes a uniform cost; there is no weight on the graph edges

Leacock-Chodorow

This is a score based on the count of edges between two words/senses, with log smoothing. It is more or less the same as Path Length and has the same characteristics, except that the score is continuous in nature due to the log smoothing.

SimLC(c1, c2) = -log(pathlen(c1, c2) / (2 * D)), where D is the maximum depth of the taxonomy
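A short sketch of how this looks with NLTK's WordNet interface, assuming the same illustrative dog/cat senses as above:

from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# -log(pathlen / (2 * D)); both senses must share a part of speech
print(dog.lch_similarity(cat))  # e.g. ~2.03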

Key Characteristics

  • Simple
  • Continuous
  • Requires tagged data and is dependent on the graph quality
  • Assumes a uniform cost; there is no specific weight on the graph edges

Wu & Palmer

This is a score that takes into account the position of concepts c1 and c2 in the taxonomy relative to the position of their Least Common Subsumer, LCS(c1, c2). Like other path-based measures, it assumes that the similarity between two concepts is a function of path length and depth.

The Least Common Subsumer of two nodes, v and w, in a tree or directed acyclic graph (DAG) T is the lowest (i.e., deepest) node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v is a descendant of w, then w is the Least Common Subsumer).

Simwup(c1, c2) = (2 * dep(LCS(c1, c2))) / (len(c1, c2) + 2 * dep(LCS(c1, c2)))

LCS(c1, c2) = lowest node in the hierarchy that is a hypernym of both c1 and c2
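A minimal NLTK sketch, again with the illustrative dog/cat senses, shows both the LCS lookup and the resulting score:

from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# The LCS is the lowest shared hypernym of the two senses
print(dog.lowest_common_hypernyms(cat))  # e.g. [Synset('carnivore.n.01')]

# Wu-Palmer score based on the depth of that LCS
print(dog.wup_similarity(cat))  # e.g. ~0.86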

Key Characteristics

  • Continuous and normalized
  • The score can never be zero
  • Heavily dependent on the quality of the graph
  • No distinction between similarity/relatedness

Resnik Similarity

This is a score denoting how similar two word senses are, based on the Information Content (IC) of the Least Common Subsumer.

Information Content is computed from the frequency counts of concepts as found in a corpus of text: the probability of a concept is estimated from its frequency, and the IC is the negative log of that probability. Each time a concept is observed in the corpus, its frequency count in WordNet is incremented, as are the counts of its ancestor concepts in the WordNet hierarchy (for nouns and verbs). Information Content can only be computed for nouns and verbs in WordNet, since these are the only parts of speech whose concepts are organized into hierarchies.

SimResnik (c1, c2) = IC(LCS(c1, c2))

LCS(c1, c2) = lowest node in the hierarchy that is a hypernym of both c1 and c2

IC(c) = -logP(c)
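In NLTK this can be sketched with a precomputed Information Content file (the Brown-corpus counts used here are one common choice, not the only one):

import nltk
nltk.download('wordnet_ic')  # one-time download of the IC files

from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')  # IC counts derived from the Brown corpus

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# IC of the Least Common Subsumer of the two senses
print(dog.res_similarity(cat, brown_ic))  # e.g. ~7.9

Note that the score depends on which corpus the IC counts come from, as the characteristics below point out.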

Key Characteristics

  • Value will always be greater than or equal to zero
  • Refines path-based approach using normalizations based on hierarchy depth
  • Relies on structure of thesaurus
  • Dependent on information content; the result is dependent on the corpus used to generate the information content and the specifics of how the information content was created
  • IC-based similarity results generally track human judgments better than path-based measures

Lin Similarity

This is a score using both the amount of information needed to state the commonality between the two concepts and the information needed to fully describe these terms.

SimLin(c1, c2) = 2 * IC(LCS(c1, c2)) / (IC(c1) + IC(c2))
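A minimal NLTK sketch, reusing the Brown-corpus IC counts from the Resnik example (an assumption; any IC source works):

from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# 2 * IC(LCS) / (IC(c1) + IC(c2)), so the score is normalized to [0, 1]
print(dog.lin_similarity(cat, brown_ic))  # e.g. ~0.88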

Key Characteristics

  • Refines path-based approach using normalizations based on hierarchy depth
  • Relies on structure of thesaurus
  • Dependent on information content; the result is dependent on the corpus used to generate the information content and the specifics of how the information content was created
  • IC-based similarity results generally track human judgments better than path-based measures

Jiang-Conrath Distance

Like Lin Similarity, this is a score that uses both the amount of information needed to state the commonality between the two concepts and the information needed to fully describe them; here, though, the measure is defined as a distance that is then inverted to give a similarity.

SimJCN(c1, c2) = 1 / distJC(c1, c2)

distJC(c1, c2) = IC(c1) + IC(c2) - 2 * IC(LCS(c1, c2))
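A minimal NLTK sketch, again assuming the Brown-corpus IC counts:

from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')

dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')

# 1 / (IC(c1) + IC(c2) - 2 * IC(LCS)); NLTK caps identical senses
# at a very large score rather than dividing by zero
print(dog.jcn_similarity(cat, brown_ic))  # e.g. ~0.45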

Key Characteristics

  • Refines path-based approach using normalizations based on hierarchy depth
  • Relies on structure of thesaurus
  • Dependent on information content; the result is dependent on the corpus used to generate the information content and the specifics of how the information content was created
  • IC-based similarity results generally track human judgments better than path-based measures
  • Care must be taken to handle the distJC = 0 case (identical concepts), which would otherwise cause division by zero


I hope you enjoyed reading this. If you have any questions or queries, please leave a comment below. I highly appreciate your feedback!

Manoj Bisht

Senior Architect

Manoj Bisht is a Senior Architect at 3Pillar Global, working out of our office in Noida, India. He has expertise in building and working with high-performance teams delivering cutting-edge enterprise products. He is also a keen researcher who dives deep into trending technologies. His current areas of interest are data science, cloud services, and microservice/serverless design and architecture. He loves to spend his spare time playing games and traveling to new places with family and friends.
