
Mock M2 · Modeling the Streaming Video Energy Footprint

Tags: Forecasting · Scenario analysis · Carbon accounting

The problem

Streaming video — Netflix, YouTube, TikTok, Twitch, plus an upcoming wave of 4K and VR content — already accounts for a substantial share of global internet traffic. Each hour streamed consumes energy across data centers, networks, and end-user devices. As resolution and viewership scale, the cumulative carbon impact may rival that of entire industries.

The International Streaming Sustainability Council (ISSC) needs a forecast model and policy recommendations.

Requirements

  1. Identify and justify the components of the streaming energy footprint (data center, network transit, last-mile, end-user device, encoding overhead, content production). Note which dominate and which are easiest to change.
  2. Build a model that estimates today's annual energy consumption and CO₂ emissions from streaming. State your assumptions explicitly.
  3. Project the footprint to 2035 under three scenarios:
    • BAU — current device mix, current resolution trends.
    • Resolution boom — widespread adoption of 8K and VR.
    • Efficient frontier — aggressive codec improvements (AV2), more renewable grid, device efficiency gains.
  4. Identify the three policy or technical interventions with the largest potential impact. Quantify their effect by modifying your model.
  5. Sensitivity analysis on the model — which inputs drive the most uncertainty?
  6. Write a one-page op-ed (700 words) for a general audience explaining whether streaming is "really" an environmental problem.

Useful starting data (rough)

  • Global streaming traffic share: ~65% of consumer internet
  • Energy per GB of streamed data (whole chain): 0.03–0.20 kWh/GB (huge disagreement in the literature)
  • SD vs. HD vs. 4K bandwidth: ~1, 5, 25 Mbps
  • End-device share of total streaming energy: ~50–70% (TV >> phone)
  • Average grid carbon intensity (global): ~0.45 kg CO₂/kWh
  • Codec compression gains: ~50% per generation (H.264 → HEVC → AV1 → AV2)

Solution sketch

Base model

$E = \sum_d \left[ N_d \cdot h_d \cdot \beta_d \cdot \epsilon_d \cdot (1 + \kappa_{\text{net}}) + N_d \cdot P_d \cdot h_d \right]$

For each device class $d$ (TV, phone, tablet, laptop, VR headset): users $N_d$, hours streamed $h_d$, bitrate $\beta_d$, network energy per bit $\epsilon_d$, network overhead $\kappa_{\text{net}}$, device power $P_d$. Sum gives total kWh. Multiply by grid carbon intensity for CO₂.
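The base formula can be sketched directly in code. Everything below except the grid carbon intensity (from the data table above) is an illustrative assumption chosen for demonstration, not a vetted estimate:

```python
# Sketch of the base model. All per-device numbers are illustrative
# assumptions, not measurements; swap in your own.

GRID_INTENSITY = 0.45  # kg CO2 per kWh (global average, from the data table)
KAPPA_NET = 0.2        # assumed network/protocol overhead fraction

# Per device class d: users N_d, hours/yr h_d, bitrate beta_d (Mbps),
# chain energy eps_d (kWh/GB), device power P_d (W)
DEVICES = {
    "tv":     dict(n=900e6,  hours=600, mbps=8.0, kwh_per_gb=0.10, watts=100),
    "phone":  dict(n=2500e6, hours=300, mbps=2.0, kwh_per_gb=0.10, watts=2),
    "laptop": dict(n=800e6,  hours=250, mbps=5.0, kwh_per_gb=0.10, watts=40),
}

def annual_energy_twh(devices=DEVICES):
    total_kwh = 0.0
    for d in devices.values():
        gb = d["n"] * d["hours"] * d["mbps"] * 3600 / 8 / 1000  # Mbps*s -> GB
        network_kwh = gb * d["kwh_per_gb"] * (1 + KAPPA_NET)    # data chain
        device_kwh = d["n"] * d["watts"] * d["hours"] / 1000    # Wh -> kWh
        total_kwh += network_kwh + device_kwh
    return total_kwh / 1e9  # kWh -> TWh

e = annual_energy_twh()
print(f"~{e:.0f} TWh/yr, ~{e * GRID_INTENSITY:.0f} MtCO2/yr")
```

Note the convenient unit coincidence: TWh × (kg CO₂/kWh) comes out directly in Mt CO₂, so no extra conversion factor is needed.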

Growth model

Don't use pure exponential growth — it explodes over an 11-year horizon. Use a logistic curve per region with saturation $K$ tied to population × maximum plausible viewing hours. Model resolution upgrades as scenario-specific bitrate trajectories $\beta_d(t)$, and codec gains as $\beta_d(t) \to \beta_d(t) / \gamma(t)$, where $\gamma(t)$ is the cumulative compression factor.
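The growth machinery can be sketched in a few lines, assuming a logistic user curve and a hypothetical codec release schedule (the generation years and the 80 Mbps example bitrate below are assumptions, not data from the brief):

```python
import math

def logistic_users(t, K, r=0.3, t_mid=2022):
    """Users at year t: logistic curve saturating at K (population x uptake)."""
    return K / (1 + math.exp(-r * (t - t_mid)))

def codec_gamma(t, gain=0.5, gen_years=(2026, 2031)):
    """Cumulative compression factor gamma(t) relative to today: each
    deployed codec generation roughly halves the required bitrate."""
    generations = sum(1 for y in gen_years if t >= y)
    return (1 / gain) ** generations

def effective_bitrate(t, beta_raw_mbps):
    """The scenario trajectory beta_d(t) divided by gamma(t)."""
    return beta_raw_mbps / codec_gamma(t)

# e.g. an 8K stream assumed at 80 Mbps raw needs only 20 Mbps by 2035
# if two codec generations (say AV2 in 2026, a successor in 2031) ship.
print(effective_bitrate(2035, 80.0))
```

The scenarios then differ only in which $\beta_d(t)$ trajectories and which `gen_years` schedule they feed in, which keeps them comparable.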

2035 scenarios (illustrative numbers)

  • BAU: ~700 TWh/yr, ~250 MtCO₂/yr
  • Resolution boom: ~1,400 TWh/yr, ~500 MtCO₂/yr
  • Efficient frontier: ~400 TWh/yr, ~80 MtCO₂/yr (cleaner grid)

(Numbers depend on the assumed kWh/GB and grid trajectory; sensitivity will show why.)

Top 3 interventions (likely)

  1. Default-bitrate policies on mobile — saves traffic without changing device-side energy much. High impact, low cost.
  2. Faster codec adoption (AV2 by 2028) — multiplicative on all device classes.
  3. Grid decarbonization — orthogonal to the demand-side measures but applies linearly to every kWh; most of the net-zero gains come from here.
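One way to quantify the interventions is as multipliers on the relevant components of a baseline, as requirement 4 asks. The baseline split and the multipliers below are illustrative assumptions, not results:

```python
def mtco2(network_twh, device_twh, grid_kg_per_kwh):
    # TWh x (kg CO2 / kWh) comes out directly in Mt CO2.
    return (network_twh + device_twh) * grid_kg_per_kwh

# Assumed 2035 BAU split; replace with your own model's output.
BASE = dict(network_twh=370.0, device_twh=65.0, grid_kg_per_kwh=0.45)

# Each intervention: assumed multipliers on (network, device, grid).
INTERVENTIONS = {
    "mobile default-bitrate caps": (0.85, 1.00, 1.00),
    "AV2 adoption by 2028":        (0.60, 1.00, 1.00),
    "grid decarbonization":        (1.00, 1.00, 0.50),
}

def avoided_mtco2(name):
    fn, fd, fg = INTERVENTIONS[name]
    modified = mtco2(BASE["network_twh"] * fn,
                     BASE["device_twh"] * fd,
                     BASE["grid_kg_per_kwh"] * fg)
    return mtco2(**BASE) - modified

for name in INTERVENTIONS:
    print(f"{name}: ~{avoided_mtco2(name):.0f} MtCO2/yr avoided")
```

Because the grid multiplier applies to the whole sum while the codec multiplier only touches the network term, the structure of the model itself explains the ranking.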

Sensitivity

The kWh/GB figure is uncertain by roughly a factor of six. The sensitivity analysis should show that this single uncertainty dominates every other input. Draw the conclusion explicitly rather than hedging: the recommendation should be that more transparent telemetry from platforms is the cheapest intervention, because we can't optimize what we can't measure.
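A one-at-a-time sweep makes the point concrete. The kWh/GB range is from the data table above; the other ranges and central values are illustrative assumptions:

```python
# One-at-a-time sensitivity: swing each input across its plausible range
# while holding the others at central values, and record the output spread.

CENTRAL = dict(traffic_gb=3.0e12, kwh_per_gb=0.08, device_twh=65.0, grid=0.45)

RANGES = dict(
    traffic_gb=(2.0e12, 4.5e12),
    kwh_per_gb=(0.03, 0.20),   # the ~6x disagreement in the literature
    device_twh=(40.0, 100.0),
    grid=(0.30, 0.60),
)

def co2_mt(p):
    # Collapsed model: (network energy + device energy) x grid intensity.
    energy_twh = p["traffic_gb"] * p["kwh_per_gb"] / 1e9 + p["device_twh"]
    return energy_twh * p["grid"]

def oat_spreads():
    """Output spread (Mt CO2/yr) induced by each input's range alone."""
    spreads = {}
    for key, (lo, hi) in RANGES.items():
        outputs = []
        for v in (lo, hi):
            p = dict(CENTRAL)
            p[key] = v
            outputs.append(co2_mt(p))
        spreads[key] = abs(outputs[1] - outputs[0])
    return spreads

for key, s in sorted(oat_spreads().items(), key=lambda kv: -kv[1]):
    print(f"{key}: output swings by ~{s:.0f} MtCO2/yr")
```

With these assumed ranges the kWh/GB swing is more than twice that of any other input, which is exactly the tornado-chart shape the argument needs.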

Self-grading focus

  • Did you separate data-center, network, and device energy? They scale differently.
  • Did you account for the huge uncertainty in kWh/GB?
  • Are your scenarios qualitatively different, not just rescaled?
  • Is the op-ed actually persuasive to a skeptic, or just lecture-y?