Proof. In the protocol in Figure 2, for each internal node $k \in [m]$ in the decision tree, the server is effectively running the comparison protocol from [28] twice, once with blinding factor $s'_k = 0$ and once with $s'_k = 1$. As in the proof of Theorem 3.1, we can appeal to the analysis of the comparison protocol from [28, §4.1] and conclude that the bits $b'_k$ for $k \in [m]$ that the client computes in the real game satisfy the relation $s'_k \oplus b'_k = \mathbf{1}\{x_{i'_k} < t'_k\}$. Next, let $b'$ denote the bit string $b'_1 \cdots b'_m$.
Then, $b'$ is the decision string of $x$ on $T'$, and so the leaf with index $\phi(b')$ (as defined in Section 2.2) in $T'$ contains the value $T'(x) = T(x)$. By construction of the protocol, for each $k \in [m]$, if the client sets $b'_k = 1$, then it is able to decrypt one of the entries in $\tilde{B}^{(1)}_k$ and learn the key $\kappa_{k,1}$. Conversely, if the client sets $b'_k = 0$, then it is able to decrypt one of the entries in $\tilde{B}^{(0)}_k$ and learn $\kappa_{k,0}$. In particular, the client obtains the keys $\kappa_{k,b'_k}$ for all $k \in [m]$. This allows the client to compute the value used to blind $z_{\phi(b')}$ in the blinded response vector and learn $T(x)$.
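The blinding relation underlying this argument can be mimicked in a few lines of Python. This is a toy sketch only: the actual comparison protocol of [28] is abstracted away as a plaintext comparison, and the function name is purely illustrative.

```python
import secrets

def blinded_compare(x_i: int, t: int) -> tuple[int, int]:
    """Toy model of one blinded comparison: the server's random blinding
    factor s' and the client's output bit b' satisfy s' XOR b' = 1{x_i < t},
    so neither party's bit alone reveals the comparison result."""
    s = secrets.randbelow(2)      # server-side blinding factor s'_k
    b = s ^ int(x_i < t)          # bit the client recovers, b'_k
    return s, b

s, b = blinded_compare(5, 9)
assert s ^ b == 1  # 5 < 9, so the unblinded result is 1
s, b = blinded_compare(9, 5)
assert s ^ b == 0  # 9 >= 5, so the unblinded result is 0
```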
Before proceeding with the proof of Theorem 4.1, we first characterize the distribution of values in the blinded response vector $[\hat{z}_0, \ldots, \hat{z}_m]$. In particular, we show that for any bit string $b \in \{0,1\}^m$, given the set of keys $\{\kappa_{k,b_k}\}_{k \in [m]}$ corresponding to the bits of $b$, all but one entry in the blinded response vector is uniformly random over $\mathbb{Z}_p$. More formally, take a complete binary tree $T$ with depth $d$. Let $m = 2^d - 1$ denote the number of internal nodes in $T$. For $k \in [m]$, choose $\kappa_{k,0}, \kappa_{k,1} \xleftarrow{\text{R}} \mathbb{Z}_p$.
. Next, for i = 0, 1, . . . , m, let b
1
· · · b
d
be the binary representation of i, and let i
1
, . . . , i
d
be the
indices of the nodes in the path induced by b
1
· · · b
d
in T . Define z
i
to be
z
i
=
M
j∈[d]
h(κ
i
j
,b
j
), (2)
where h is a pairwise independent hash function from Z
p
onto {0, 1}
`
(for some output length `
where ` < blog
2
pc). Intuitively, we associate a uniformly random key with each edge in T (or
equivalently, two keys with each internal node). The value z
i
is the xor of the keys associated with
each edge in the path from the root to leaf i. Now, we state the main lemma on the distribution of
each z
i
conditioned on knowing exactly one of the two keys associated with each internal node.
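To make Eq. (2) concrete, here is a minimal Python sketch of the edge-key construction. The heap-style node indexing and the instantiation of $h$ (a truncated BLAKE2b digest standing in for a pairwise independent hash) are assumptions of this sketch, not details fixed by the text.

```python
import hashlib
import secrets

def h(key: int, ell: int = 64) -> int:
    """Illustrative stand-in for the hash h: Z_p -> {0,1}^ell."""
    digest = hashlib.blake2b(key.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << ell)

def path_nodes(i: int, d: int) -> list[tuple[int, int]]:
    """(internal node, edge bit) pairs on the root-to-leaf path for leaf i,
    with 1-indexed heap layout: bit b_j selects left (0) or right (1) child."""
    node, pairs = 1, []
    for j in range(d):
        bit = (i >> (d - 1 - j)) & 1
        pairs.append((node, bit))
        node = 2 * node + bit
    return pairs

def make_leaf_masks(d: int, p: int, ell: int = 64):
    """Sample two keys per internal node and compute each z_i per Eq. (2)."""
    m = 2**d - 1
    keys = {(k, b): secrets.randbelow(p) for k in range(1, m + 1) for b in (0, 1)}
    z = []
    for i in range(2**d):
        mask = 0
        for node, bit in path_nodes(i, d):
            mask ^= h(keys[(node, bit)], ell)  # XOR of keys along the path
        z.append(mask)
    return keys, z
```

Each $z_i$ is then the XOR of $d$ hashed edge keys, one per level of the tree, matching the sum in Eq. (2).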
Lemma B.2. Fix a complete binary tree $T$ with depth $d$. Let $m = 2^d - 1$ and, for each $0 \le i \le m$, construct $z_i$ according to Eq. (2), where $\kappa_{k,b} \xleftarrow{\text{R}} \mathbb{Z}_p$ for $k \in [m]$ and $b \in \{0,1\}$. Take any bit-string $b \in \{0,1\}^m$ and let $\alpha = \phi(b)$ be the index of the leaf node in the path induced by $b$ in $T$. Then, for all $i \neq \alpha$, the conditional distribution of $z_i$ given $\{\kappa_{k,b_k}\}_{k \in [m]}$ is independent of the other entries $\{z_k\}_{k \neq i}$ and statistically close to uniformly random over $\{0,1\}^\ell$.
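The combinatorial core of Lemma B.2 can be checked structurally: for every leaf $i \neq \alpha = \phi(b)$, the root-to-leaf path of $i$ deviates from the path induced by $b$ at some internal node $k$, so the mask of $z_i$ involves a key $\kappa_{k,1-b_k}$ that the client does not hold. The sketch below verifies this for one small tree; the heap-style node indexing and the particular encoding of $\phi$ are assumptions, since Section 2.2's exact convention is not reproduced here.

```python
def phi(b_bits: list[int]) -> int:
    """Leaf index reached by following bit b_k at internal node k
    (1-indexed heap layout; b has m = 2^d - 1 entries)."""
    d = (len(b_bits) + 1).bit_length() - 1
    node = 1
    for _ in range(d):
        node = 2 * node + b_bits[node - 1]
    return node - 2**d  # leaves 2^d .. 2^{d+1}-1 map to 0 .. 2^d-1

def path_edges(i: int, d: int) -> list[tuple[int, int]]:
    """(internal node, chosen bit) pairs on the path to leaf i."""
    node, edges = 1, []
    for j in range(d):
        bit = (i >> (d - 1 - j)) & 1
        edges.append((node, bit))
        node = 2 * node + bit
    return edges

d = 3
m = 2**d - 1
b = [1, 0, 1, 1, 0, 0, 1]              # one comparison bit per internal node
alpha = phi(b)
client_keys = {(k + 1, b[k]) for k in range(m)}  # the set {kappa_{k, b_k}}
for i in range(2**d):
    missing = [e for e in path_edges(i, d) if e not in client_keys]
    # Only leaf alpha has a fully known mask; every other z_i stays blinded
    # by at least one key outside the client's set.
    assert (missing == []) == (i == alpha)
```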
Proof. We argue by induction on the depth of the tree. When $d = 1$, there are exactly two keys $\kappa_{1,0}$ and $\kappa_{1,1}$. Moreover, $z_0 = h(\kappa_{1,0})$ and $z_1 = h(\kappa_{1,1})$. Since $\kappa_{1,0}$ and $\kappa_{1,1}$ are independently uniform over $\mathbb{Z}_p$, and $h$ is pairwise independent, the claim follows by the leftover hash lemma.
For the inductive step, take a tree $T$ of depth $d > 1$ and a bit-string $b \in \{0,1\}^m$. Let $\alpha$ be the index of the leaf node in the path induced by $b$ in $T$. Let $\hat{T}$ be the complete subtree of $T$ (rooted at the same node) of depth $d-1$. For $i = 0, \ldots, 2^{d-1} - 1$, let $\hat{b}_1 \cdots \hat{b}_{d-1}$ be the binary representation of $i$, and let $\hat{i}_1, \ldots, \hat{i}_{d-1}$ be the indices of the nodes in the path induced by $\hat{b}_1 \cdots \hat{b}_{d-1}$ in $\hat{T}$. Then, define the value $\hat{z}_i = \bigoplus_{j \in [d-1]} h(\kappa_{\hat{i}_j, \hat{b}_j})$. Set $\hat{m} = 2^{d-1} - 1$ and $\hat{b} = b_1 \cdots b_{d-1}$. Let $\hat{\alpha}$ be the index of the leaf node in the path induced by $\hat{b}$ in $\hat{T}$. Invoking the induction hypothesis, we have that for all $i \neq \hat{\alpha}$, the conditional distribution of $\hat{z}_i$ given $\{\kappa_{k,b_k}\}_{k \in [\hat{m}]}$ is statistically close to the uniform