Issue
I am reading the kernel source code and trying to understand the mechanism of IP conntrack. In the function get_next_corpse below, why does the code increment the use counter of the nf_conn struct that has just been found for cleanup?
static struct nf_conn *
get_next_corpse(struct net *net, int (*iter)(struct nf_conn *i, void *data),
		void *data, unsigned int *bucket)
{
	struct nf_conntrack_tuple_hash *h;
	struct nf_conn *ct;
	struct hlist_nulls_node *n;

	spin_lock_bh(&nf_conntrack_lock);
	for (; *bucket < nf_conntrack_htable_size; (*bucket)++) {
		hlist_nulls_for_each_entry(h, n, &net->ct.hash[*bucket], hnnode) {
			ct = nf_ct_tuplehash_to_ctrack(h);
			if (iter(ct, data))
				goto found;
		}
	}
	hlist_nulls_for_each_entry(h, n, &net->ct.unconfirmed, hnnode) {
		ct = nf_ct_tuplehash_to_ctrack(h);
		if (iter(ct, data))
			set_bit(IPS_DYING_BIT, &ct->status);
	}
	spin_unlock_bh(&nf_conntrack_lock);
	return NULL;
found:
	atomic_inc(&ct->ct_general.use);	/* Why ??! */
	spin_unlock_bh(&nf_conntrack_lock);
	return ct;
}
Since the ct was found precisely so that it can be cleaned up, why is the atomic_inc(&ct->ct_general.use) needed?
Solution
The atomic_inc takes a reference on the conntrack entry so that it remains valid after nf_conntrack_lock is released. The spinlock only protects the hash table while the entry is being looked up; once get_next_corpse drops the lock and returns, another CPU (or the timer that normally destroys entries) could otherwise free the nf_conn before the caller has finished with it. Holding an extra reference keeps the object alive until the caller explicitly drops it again with nf_ct_put(). In other words, it keeps the conntrack consistent under concurrency/preemption, much like the P/V (semaphore) operations introduced in OS textbooks.
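For context, the caller of get_next_corpse in this part of the conntrack code, nf_ct_iterate_cleanup(), is the one that eventually drops the reference with nf_ct_put() after it has dealt with the entry. Below is a simplified sketch of what that caller looks like; the exact code differs between kernel versions, so treat it as illustrative rather than exact:

void nf_ct_iterate_cleanup(struct net *net,
			   int (*iter)(struct nf_conn *i, void *data),
			   void *data)
{
	struct nf_conn *ct;
	unsigned int bucket = 0;

	while ((ct = get_next_corpse(net, iter, data, &bucket)) != NULL) {
		/* get_next_corpse() returned ct with an extra reference,
		 * so the entry cannot be freed underneath us even though
		 * nf_conntrack_lock has already been released. */
		if (del_timer(&ct->timeout))
			death_by_timeout((unsigned long)ct);
		/* ...else the timer will destroy it soon anyway. */

		/* Drop the reference taken by get_next_corpse(). */
		nf_ct_put(ct);
	}
}

Without the atomic_inc inside get_next_corpse, the window between spin_unlock_bh() and the caller's use of ct would be a use-after-free hazard.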