Notes on the "Reverse Mutual Information"

We'll start with
\[ \mathbb{E}_{\theta \sim p(\Theta)}\left[ D\bigl( p(Y) \,\big\|\, p(Y|\Theta) \bigr)\right]. \]
Since \( p(y)/p(y|\theta) = p(y)p(\theta)/p(y,\theta) \), this is equal to
\[ \sum_y p(y) \sum_\theta p(\theta) \log \frac{ p(y) }{ p(y|\theta) } = \sum_{y,\theta} p(y)p(\theta) \log \frac{ p(y)p(\theta) }{ p(y, \theta) }. \]
Wouldn't the r.h.s. also be a good candidate for a “reverse mutual information”? I.e., is
\[ RI(Y; \Theta) = \sum_{y,\theta} p(y)p(\theta) \log \frac{ p(y)p(\theta) }{ p(y, \theta) } \geq 0? \]
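The answer appears to be yes: the r.h.s. is itself a Kullback-Leibler divergence, with the product of marginals and the joint swapped relative to the usual mutual information, so non-negativity follows from Gibbs' inequality:
\[ RI(Y;\Theta) = D\bigl( p(Y)p(\Theta) \,\big\|\, p(Y,\Theta) \bigr) \geq 0. \]
(This quantity is known in the literature as the lautum information; Palomar & Verdú, 2008.)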

If this were the case, then your double expected loss gap would be
\[ \mathbb{E}_{\theta' \sim p(\Theta)} \mathbb{E}_{\theta \sim p(\Theta)} \bigl[ D\bigl( p(Y|\Theta) \,\big\|\, p(Y|\Theta') \bigr) \bigr] = RI(Y;\Theta') + I(Y;\Theta). \]
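To see why, here is a short derivation (a worked check, not part of the original note): add and subtract \( \log p(y) \) inside the divergence, and use \( \mathbb{E}_{\theta}[ p(y|\theta) ] = p(y) \) to collapse the inner expectation of the second term:
\[
\begin{aligned}
\mathbb{E}_{\theta'} \mathbb{E}_{\theta} \bigl[ D\bigl( p(Y|\Theta) \,\big\|\, p(Y|\Theta') \bigr) \bigr]
&= \mathbb{E}_{\theta,\theta'} \left[ \sum_y p(y|\theta) \left( \log \frac{ p(y|\theta) }{ p(y) } + \log \frac{ p(y) }{ p(y|\theta') } \right) \right] \\
&= I(Y;\Theta) + \mathbb{E}_{\theta'} \left[ D\bigl( p(Y) \,\big\|\, p(Y|\Theta') \bigr) \right] \\
&= I(Y;\Theta) + RI(Y;\Theta').
\end{aligned}
\]

And a minimal numerical sanity check of the identity in Python/NumPy, assuming finite alphabets for \(Y\) and \(\Theta\) and a randomly drawn joint distribution (the sizes and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random, strictly positive joint distribution p(y, theta) on a 4x3 grid.
p_joint = rng.random((4, 3))
p_joint /= p_joint.sum()

p_y = p_joint.sum(axis=1)       # marginal p(y)
p_t = p_joint.sum(axis=0)       # marginal p(theta)
p_y_given_t = p_joint / p_t     # column t holds p(y | theta = t)
prod = np.outer(p_y, p_t)       # product of marginals p(y) p(theta)

# I(Y; Theta)  = sum_{y,theta} p(y,theta)   log[ p(y,theta) / (p(y) p(theta)) ]
# RI(Y; Theta) = sum_{y,theta} p(y) p(theta) log[ p(y) p(theta) / p(y,theta) ]
I = np.sum(p_joint * np.log(p_joint / prod))
RI = np.sum(prod * np.log(prod / p_joint))

# Double expected loss gap: E_{theta'} E_{theta} D( p(Y|theta) || p(Y|theta') )
gap = sum(
    p_t[t] * p_t[t2]
    * np.sum(p_y_given_t[:, t] * np.log(p_y_given_t[:, t] / p_y_given_t[:, t2]))
    for t in range(p_t.size)
    for t2 in range(p_t.size)
)

assert RI >= 0                      # RI is a KL divergence, hence non-negative
assert np.isclose(gap, RI + I)      # gap = RI(Y;Theta') + I(Y;Theta)
```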
