Energy markets are unstable, and so are forecasts of the near-term future
by Chris Rachwal
With half the world still in lockdown and heated debates on the pace of exit strategies to get economies moving, it is vital we learn from recent performance of the sector and underlying trends that drive it.
Most of today’s attention is on huge imbalances within the oil market, yet natural gas markets have been out of balance between supply and demand for at least 18 months. The US gas rig count peaked at 202 in January 2019 with Henry Hub (HH) at $3.05/MMBtu (EIA), then declined sharply to 85 rigs by April 2020 (HH below $2). In spite of this, oil-related associated gas helped push gas production to a peak in November 2019, and output remains close to that level.
Working gas in storage was below the five-year norm for winter 2018-2019, so this oil/gas dynamic helped bring more gas to market than gas prices alone would have signaled. Shale gas production stopped growing in late 2019, while US gas demand is broadly similar to last year’s; however, LNG export volumes are significantly below their January peak and well below installed export capacity.
The next few months may start to bring greater stability to gas, albeit at different price levels. It seems a little longer will be needed before US and global oil markets achieve such a balance.
Crude Oil – What is a fair market share for US shale?
US commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 9.0 million barrels from the previous week, and SPR stocks increased by 1.15 million barrels, according to the EIA. At 527.6 million barrels, US crude oil inventories are about 10% above the five-year average for this time of year, but are not yet hitting tank tops. US domestic crude production dropped another 100,000 barrels per day last week to a current level of 12.1 million barrels per day. In addition, at least another 300,000 barrels per day of shut-ins are expected in May and June as the rig count continues to fall.
Global oil demand in April hit a low of ~72 million barrels per day amid the COVID-19 pandemic. This sudden and significant demand shock is the biggest challenge for achieving some form of balance in the oil market at the moment. Although the spread of the virus has not visibly slowed down in North America and many other parts of the world, discussion has already started on what the world will look like after this disastrous global event. Some suspect that demand for travel may never recover to pre-pandemic levels, as fear of mass gatherings will linger and continue to reduce tourism and business travel. Travel demand could shrink significantly, since people are now more used than ever to working, and connecting with others, from remote locations. Transportation accounts for roughly one-third of world energy consumption, so any change in travel behavior will have a medium- to long-term impact on the oil and gas industry.
On the supply side, the long-run picture also carries much uncertainty. Even though the collapse of crude prices in March and April is generally characterized as the result of Saudi Arabia and Russia failing to reach an agreement to curtail production, some would argue that the US is the one that really needs to reduce production. There is no doubt that, in the past decade, production from shale and tight formations in the US has structurally changed the competitive landscape of global crude supply, in addition to that of gas and natural gas liquids (NGL). The US shale boom had two distinct phases, each lasting about five years. Initially, US domestic production rose from 5.5 million barrels per day in 2010 to 8.5 million barrels per day in the summer of 2014. In this phase, the Bakken and Eagle Ford led production growth, and breakeven prices of shale projects could be as high as $60-80 per barrel. Then, WTI dropped from over $100 per barrel to a $45-65 per barrel range after 2015, yet US production increased to 13 million barrels per day at the beginning of 2020, adding another 4.5 million barrels per day within five years. This second phase of remarkable growth was led mainly by activity in the Permian, and it made the US one of the top oil producers in the world. Production of associated natural gas also saw record increases in this period.
Two factors played a key role during the shale boom of the past decade: continuous technological advancement and abundant access to capital. US light tight oil (LTO) producers continue to innovate in drilling and fracking technologies, and higher efficiencies have translated into better shale asset economics year on year. Although a sub-$20 per barrel breakeven is reported every now and then for one or two excellent wells, large-scale drilling programs cannot be expected to reach that level of profitability. Therefore, at current price levels, most drilling programs are likely unprofitable, which is why the US rig count is dropping like a stone.
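To make the breakeven idea above concrete, here is a minimal sketch of a single-well breakeven calculation. All figures are hypothetical and illustrative; it ignores discounting, taxes, and decline-curve timing, which real well economics must include.

```python
def breakeven_price(well_cost, eur_bbl, opex_per_bbl, royalty=0.25):
    """Price per barrel at which lifetime net revenue covers
    drilling/completion cost plus operating cost.

    Solves: price * (1 - royalty) * EUR = well_cost + opex * EUR
    """
    return (well_cost / eur_bbl + opex_per_bbl) / (1 - royalty)

# A hypothetical Permian-style well: $8M to drill and complete,
# 800,000 bbl estimated ultimate recovery, $10/bbl operating cost.
print(round(breakeven_price(8_000_000, 800_000, 10.0), 2))  # ~$26.67/bbl
```

Even this optimistic toy case lands well above $20/bbl, which is consistent with the observation that sub-$20 breakevens belong to exceptional wells, not full drilling programs.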
Adding to the profitability challenges, the capital market is not as open to the upstream sector as it was a couple of years ago. By now, many investors have learnt the hard lesson that available drilling locations do not necessarily translate into profitable production. Considering that the majority of LTO producers have not, or have only just, started to generate positive cash flow, it is no surprise that recent price drops put yet more stress on their already heavily leveraged balance sheets.
The reduction in oil demand caused by the COVID-19 pandemic is clearly not the only difficulty facing the US shale industry. Domestically, LTO producers will probably keep consolidating, as with the many “mergers of equals” of 2018 and 2019. In the long run, shale assets have to compete with other capital-intensive upstream projects around the globe. As Daniel Yergin said, “Companies go bankrupt, but rocks don’t go bankrupt.” It remains to be seen at what level US production will settle after this period of adjustment.
Carbon Management – Houston, we've had a [data quality] problem
by Nigel Jenvey
As oil and gas companies feel the social and financial pressure to report on their GHG emissions, the industry is also trying to solve an emissions data quality problem. Houston, the energy capital of the world, last month saw the largest ever decline in oil demand, and also marked the 50th anniversary of the famous Apollo 13 transmission, “Houston, we’ve had a problem” (yes, I too thought it was in the present tense, but that is Hollywood).
The majority of the oil and gas industry’s GHG emissions data is derived from desktop calculations based on generic emission factors for equipment, not real-world measurements. When real-world emissions data is collected, it can of course differ from the calculated figures because of various environmental or operational factors. The industry is improving on this paradigm with 24/7 sensors and digital solutions, because inaccurate or missing underlying data can compromise the integrity and credibility of reported information.
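The contrast between the two approaches can be sketched in a few lines. The emission factors, equipment counts, and sensor readings below are all hypothetical, chosen only to illustrate how a factor-based inventory and a measurement-based estimate can diverge for the same site.

```python
# Desktop estimate: equipment count x generic emission factor.
# (Factors here are assumed, illustrative values, not official ones.)
EMISSION_FACTORS_KG_CH4_PER_HR = {
    "pneumatic_controller": 0.3,
    "compressor_seal": 1.2,
}
equipment_counts = {"pneumatic_controller": 40, "compressor_seal": 6}

HOURS_PER_YEAR = 8760
factor_estimate = sum(
    count * EMISSION_FACTORS_KG_CH4_PER_HR[kind] * HOURS_PER_YEAR
    for kind, count in equipment_counts.items()
)  # kg CH4/year

# Measurement-based estimate: annualize the mean of continuous
# site-level sensor readings (hypothetical values, kg CH4/hr).
sensor_readings = [21.5, 19.8, 24.1, 30.2]
measured_estimate = sum(sensor_readings) / len(sensor_readings) * HOURS_PER_YEAR

print(f"factor-based: {factor_estimate:,.0f} kg CH4/yr")
print(f"measured:     {measured_estimate:,.0f} kg CH4/yr")
```

In this toy example the measured figure exceeds the factor-based one, the pattern the methane research described below most often finds, though the gap can run in either direction.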
Methane emissions are a good example. Around the world, research reveals that methane emissions inventories routinely underestimate, and in some cases overestimate, actual emissions. We simply cannot build trust on inaccurate data. The Environmental Defense Fund’s report, “Hitting the Mark: Improving the Credibility of Industry Methane Data,” is an important resource for the industry, and explores three critical actions that must be taken to improve data accuracy and earn stakeholder confidence:
1) Integrate direct – real world – measurement into emissions estimates.
2) Increase the transparency and granularity of methane emissions reporting.
3) Validate reported methane data through a qualified and independent third-party audit.
The good news is that, despite the downturn, the industry is continuing to work on improved guidelines for reporting GHGs, and on frameworks that deliver assertions and ‘limited assurance’ certifications. These new processes use scientific principles to ensure that the underlying data are accurate, transparent and consistent: all relevant issues are dealt with in a factual and coherent manner; calculations follow a standardized, documented method; and there is a clear audit trail, with assumptions and data sources disclosed and uncertainties quantified and reduced as far as practicable. The end result is a scientifically derived, probability-based estimate.
Carbon intensity is another metric for which there is no standard industry approach to ensure such accuracy, transparency and consistency. A GaffneyCline review of the top 10 oil and gas companies, as ranked by the Carbon Disclosure Project (CDP) on readiness for a low-carbon transition, found that while all had started reporting on the carbon intensity of their business, no two calculations were comparable because of differences in units and methods. These include reporting boundaries (operated, operational control, equity), variations in completeness (Scope 1; Scopes 1 and 2; Scopes 1, 2 and 3; inclusion of all GHGs or a focus on CO2 and CH4), and contrasting approaches to accuracy and transparency (some verified with independent but limited assurance, some internal only).
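The comparability problem can be illustrated with a small sketch. All emissions and production figures below are hypothetical; the point is only that the same company reports very different intensity numbers depending on which scopes and which production boundary it chooses.

```python
# Hypothetical annual figures for one company.
emissions_mt_co2e = {"scope1": 50.0, "scope2": 10.0, "scope3": 400.0}  # Mt CO2e
production_boe = {"operated": 900e6, "equity": 600e6}  # barrels of oil equivalent

def carbon_intensity(scopes, boundary):
    """kg CO2e per boe for a chosen scope set and production boundary."""
    total_kg = sum(emissions_mt_co2e[s] for s in scopes) * 1e9  # Mt -> kg
    return total_kg / production_boe[boundary]

# Same company, three common reporting choices:
print(carbon_intensity(["scope1"], "operated"))                    # ~55.6 kg/boe
print(carbon_intensity(["scope1", "scope2"], "operated"))          # ~66.7 kg/boe
print(carbon_intensity(["scope1", "scope2", "scope3"], "equity"))  # ~766.7 kg/boe
```

A more than tenfold spread from a single set of books shows why, without a disclosed boundary and scope definition, published carbon intensity figures cannot be compared across companies.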
The business case for improving data quality is clear. If we understand and manage what we measure, then the industry can make more informed decisions about which activities will be economic and which solutions are available over time to reduce carbon intensity. This will secure business models and asset valuations. It will secure our social license and enable sustainable investment and lending decisions to be made with confidence. Better data underpins Oil and Gas 2.0.
Major improvements in the data quality of carbon emissions performance in oil and gas are, then, within reach. Reporting is becoming more transparent and accurate. Houston, we’ve had a data quality problem. Let’s make sure Hollywood gets the past tense right. Contact us to find out more about being on the right side of history.