Decentralized optimization has emerged as a compelling alternative to centralized methods by allowing agents to bypass the bottleneck of a central aggregator. Instead, agents communicate only with their neighbors to collaboratively optimize a global cost function and reach consensus. This thesis focuses on stochastic decentralized bilevel optimization (DBO), where one optimization problem is constrained by the solution of another. We investigate two distinct settings within DBO. In the first setting, agents in a single network jointly optimize the aggregate of their local objectives. To reduce communication frequency, they perform multiple local updates between communication rounds—a strategy studied in single-level and federated bilevel optimization, but still largely unexplored in fully decentralized bilevel optimization. In the second setting, agents are split into two networks: each network collaboratively solves one level of the bilevel problem, while the interaction between the two follows an inherently sequential Stackelberg game. Unlike prior work, which assumes non-cooperative agents within each team, we explore the effect of intra-team collaboration on the dynamics and outcomes of such hierarchical games. For both settings, we develop single-loop algorithms that simultaneously update the upper- and lower-level variables. The one-network method achieves an asymptotic rate of O((KTτ)^{-1/2}), where K is the number of agents and τ the number of local updates, exhibiting linear speedup—that is, the sample complexity improves proportionally with K and τ. The two-network method achieves an asymptotic rate of O(T^{-1/2} + b^2), where the O(b^2) term denotes an uncontrollable steady-state error caused by non-i.i.d. agent distributions, highlighting the need to incorporate heterogeneity-correction techniques. In both cases, we provide transient analyses, showing that increasing τ or K, respectively, can slow early-stage convergence.
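The one-network scheme summarized above—single-loop updates of both levels, with τ local steps between gossip rounds—can be illustrated on a toy problem. The sketch below is purely illustrative and is not the thesis's actual algorithm: the ring topology, step sizes, and the quadratic objectives (chosen so the hypergradient has a closed form) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K, tau, T = 4, 3, 200     # agents, local updates per round, communication rounds
alpha, beta = 0.05, 0.1   # upper- and lower-level step sizes (illustrative)

# Doubly stochastic mixing matrix for a ring: each agent averages with its
# two neighbours. Any connected gossip matrix would serve the same role.
W = np.zeros((K, K))
for k in range(K):
    W[k, k] = 0.5
    W[k, (k - 1) % K] = 0.25
    W[k, (k + 1) % K] = 0.25

# Hypothetical local bilevel objectives for agent k:
#   lower level: y minimizes 0.5 * (y - a_k * x)^2, so y*(x) = a_k * x
#   upper level: 0.5 * (x - c_k)^2 + 0.5 * y^2
a = rng.normal(1.0, 0.1, K)
c = rng.normal(2.0, 0.1, K)

x = np.zeros(K)  # upper-level variable, one local copy per agent
y = np.zeros(K)  # lower-level variable, one local copy per agent

for t in range(T):
    # tau local single-loop steps: both levels updated simultaneously,
    # with no communication between agents.
    for _ in range(tau):
        y = y - beta * (y - a * x)        # lower-level gradient step
        # implicit-function hypergradient: d y*(x) / dx = a_k
        x = x - alpha * ((x - c) + y * a)
    # one gossip round: neighbours mix both variables.
    x = W @ x
    y = W @ y
```

With small step sizes and a connected mixing matrix, the local copies reach approximate consensus near the stationary point of the averaged problem, where x(1 + mean(a_k^2)) ≈ mean(c_k); raising τ saves communication at the cost of the early-stage slowdown the transient analysis describes.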