Markov entropy centrality
Markov entropy centrality, originally called entropy centrality, is an entropy-based node centrality metric derived from a discrete random Markovian transfer process [2]. In this model, an object is transferred from a given node according to the following rules: at each step, the object is either absorbed by the current node with probability \(a\), terminating the process, or passed to one of its neighbors with probability \(1-a\), allowing the process to continue.
The centrality of node \(i\), \(c_{\mathrm{MEC}}(i)\), is quantified by the entropy of the distribution of destinations reached by the object originating from \(i\) after \(t\) transitions:
\[
c_{\mathrm{MEC}}(i) = - \sum_{j=1}^N \left(p_{ij}^t + p_{ij'}^t\right) \log \left(p_{ij}^t + p_{ij'}^t\right),
\]
where \(p_{ij}^t\) denotes an entry of \(P^t\), the \(t\)-th power of the transition matrix, so that \(p_{ij}^t + p_{ij'}^t\) is the probability that the object, starting at node \(i\), is held by node \(j\) after \(t\) steps: either still located at \(j\) or already absorbed into the corresponding absorbing state \(j'\). The \(2N \times 2N\) transition probability matrix \(P\), defined over the \(N\) nodes and their \(N\) absorbing states, is given by
\[
p_{ij} =
\begin{cases}
a, & \text{if } j = i',\\
1, & \text{if } i = j \text{ is an absorbing state},\\
\frac{(1-a)\, a_{ij}}{d_i}, & \text{if } i \text{ and } j \text{ are both ordinary nodes},\\
0, & \text{otherwise,}
\end{cases}
\]
where \(i'\) denotes the absorbing state corresponding to node \(i\), \(a_{ij}\) is the adjacency matrix entry, and \(d_i\) is the degree of node \(i\). In other words, an ordinary node either hands the object to its own absorbing state with probability \(a\) or to a uniformly random neighbor with total probability \(1-a\), while absorbing states retain the object forever.
By design, Markov entropy centrality measures a node's potential for information spread: nodes with high entropy can reach a diverse set of destinations with relatively even probability, indicating a structurally influential and versatile role. Conversely, low entropy implies that walks starting from the node are concentrated on a few targets, reflecting lower reach. Experimentally, Nikolaev et al. [2] suggest using \(t=5\) and absorption probability \(a \in [0.1, 0.2]\).
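The definition above can be sketched directly in NumPy. The following is a minimal illustration, not a reference implementation: it assumes an undirected graph given as a dense adjacency matrix, the function name and the handling of isolated nodes (immediate absorption) are choices made here, and the defaults \(a = 0.15\), \(t = 5\) follow the recommendation in [2].

```python
import numpy as np

def markov_entropy_centrality(A, a=0.15, t=5):
    """Markov entropy centrality of every node (sketch).

    A : (N, N) symmetric adjacency matrix
    a : absorption probability at each step
    t : number of transitions
    """
    N = A.shape[0]
    deg = A.sum(axis=1)
    # Build the 2N x 2N transition matrix P: states 0..N-1 are the
    # ordinary nodes, states N..2N-1 are their absorbing counterparts i'.
    P = np.zeros((2 * N, 2 * N))
    for i in range(N):
        if deg[i] > 0:
            P[i, i + N] = a                       # absorbed at node i
            P[i, :N] = (1 - a) * A[i] / deg[i]    # passed to a random neighbor
        else:
            P[i, i + N] = 1.0                     # isolated node: absorb immediately
        P[i + N, i + N] = 1.0                     # absorbing states are sinks
    Pt = np.linalg.matrix_power(P, t)
    # Probability that node j "holds" the object after t steps, starting
    # from i: either the walk is at j, or it was absorbed into state j'.
    q = Pt[:N, :N] + Pt[:N, N:]
    # Entropy of each row, with the convention 0 * log(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(q > 0, q * np.log(q), 0.0)
    return -terms.sum(axis=1)
```

On a star graph, for example, the hub spreads the object almost evenly over all leaves, while a walk started at a leaf concentrates on the hub and the leaf itself, so the hub's entropy comes out higher, matching the interpretation above.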