Following comments in my other answer (and looking again at the title of the OP's question!), here is a not-very-rigorous theoretical exploration of the issue.
We want to determine whether the bias $B(\hat \theta_n) = E(\hat \theta_n) - \theta$ may have a different convergence rate than the square root of the variance,

$$B(\hat \theta_n) = O\left(1/n^{\delta}\right), \qquad \sqrt{\operatorname{Var}(\hat \theta_n)} = O\left(1/n^{\gamma}\right), \qquad \gamma \neq \delta\;?$$
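Before the algebra, a quick way to see what $\delta$ and $\gamma$ look like for a concrete estimator is to estimate them by simulation, regressing $\log|\text{bias}|$ and $\log(\text{sd})$ on $\log n$. The sketch below does this with numpy; the choice of estimator (the biased ML estimator of a normal variance, whose theoretical rates are $\delta = 1$, $\gamma = 1/2$) is only an illustration, not part of the argument.

```python
# Rough Monte-Carlo sketch: estimate delta and gamma by regressing
# log|bias| and log(sd) on log(n). The estimator used here (the biased
# ML variance estimator of an i.i.d. N(0,1) sample) is only an example.
import numpy as np

rng = np.random.default_rng(0)
true_var = 1.0
sample_sizes = [25, 50, 100, 200, 400]
n_rep = 50_000                      # Monte-Carlo replications per sample size

log_bias, log_sd = [], []
for n in sample_sizes:
    x = rng.normal(0.0, 1.0, size=(n_rep, n))
    est = x.var(axis=1)             # ddof=0: the biased ML estimator of sigma^2
    log_bias.append(np.log(abs(est.mean() - true_var)))
    log_sd.append(np.log(est.std()))

log_n = np.log(sample_sizes)
# Slopes of the log-log fits are approximately -delta and -gamma
delta_hat = -np.polyfit(log_n, log_bias, 1)[0]
gamma_hat = -np.polyfit(log_n, log_sd, 1)[0]
print(f"delta (bias rate) ~ {delta_hat:.2f}   theory: 1")
print(f"gamma (sd rate)   ~ {gamma_hat:.2f}   theory: 0.5")
```

The fitted slopes should come out close to $1$ and $1/2$, i.e. a case with $\delta > \gamma$.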
To keep the algebra simple, set $\theta = 0$ without loss of generality (equivalently, work with $\hat \theta_n - \theta$, which has the same variance and whose expected value equals the bias), so that $B(\hat \theta_n) = E(\hat \theta_n)$. We then have

$$B(\hat \theta_n) = O\left(1/n^{\delta}\right) \implies \lim n^{\delta}E(\hat \theta_n) < K \implies \lim n^{2\delta}\left[E(\hat \theta_n)\right]^2 < K'$$

$$\implies \left[E(\hat \theta_n)\right]^2 = O\left(1/n^{2\delta}\right) \tag{1}$$
while

$$\sqrt{\operatorname{Var}(\hat \theta_n)} = O\left(1/n^{\gamma}\right) \implies \lim n^{\gamma}\sqrt{E(\hat \theta_n^2) - \left[E(\hat \theta_n)\right]^2} < M$$

$$\implies \lim \sqrt{n^{2\gamma}E(\hat \theta_n^2) - n^{2\gamma}\left[E(\hat \theta_n)\right]^2} < M$$

$$\implies \lim n^{2\gamma}E(\hat \theta_n^2) - \lim n^{2\gamma}\left[E(\hat \theta_n)\right]^2 < M' \tag{2}$$
We see that $(2)$ may hold if

A) both components are $O(1/n^{2\gamma})$, in which case we can only have $\gamma = \delta$;

B) but it may also hold if
$$\lim n^{2\gamma}\left[E(\hat \theta_n)\right]^2 = 0 \implies \left[E(\hat \theta_n)\right]^2 = o\left(1/n^{2\gamma}\right) \tag{3}$$
For $(3)$ to be compatible with $(1)$, the rate $1/n^{2\delta}$ obtained in $(1)$ must go to zero faster than $1/n^{2\gamma}$, i.e. we must have

$$n^{2\gamma} < n^{2\delta} \implies \delta > \gamma \tag{4}$$
So it appears that, in principle, the bias may converge at a faster rate than the square root of the variance, but the square root of the variance cannot converge at a faster rate than the bias.
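A standard example of the first case: for the maximum-likelihood variance estimator $\hat \sigma_n^2 = \frac 1n \sum_{i=1}^n (X_i - \bar X_n)^2$ based on an i.i.d. $N(\mu, \sigma^2)$ sample, we have

$$B(\hat \sigma_n^2) = -\frac{\sigma^2}{n} = O\left(1/n\right), \qquad \sqrt{\operatorname{Var}(\hat \sigma_n^2)} = \frac{\sqrt{2(n-1)}}{n}\,\sigma^2 = O\left(1/n^{1/2}\right),$$

so $\delta = 1 > \gamma = 1/2$: the bias dies out faster than the standard deviation, as in case B) above.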