2.2. Calculation Method
The battery cost analysis is a spreadsheet-based methodology. An important first step in the analysis is to estimate battery energy capacities and power requirements for the vehicles to be modeled. Because capacity and power requirements are strongly influenced by vehicle weight, and battery weight is both a function of and a contributor to vehicle weight, sizing the battery requires an iterative solution. This problem is well suited to the iteration function available in common spreadsheet software [3]. The use of a spreadsheet also makes the analysis easily accessible to public inspection. To this end, further detail on the choice of inputs to this analysis [3] and access to the spreadsheets used in the analysis [6] are available.
BatPaC [5] is a spreadsheet-based lithium-ion battery costing model developed by ANL. It employs a rigorous, bottom-up, bill-of-materials approach to battery cost analysis. User inputs to BatPaC include performance goals (power and energy capacity), choice of battery chemistry (for example, lithium manganese oxide (LMO) or several varieties of nickel manganese cobalt oxide (NMC)), the vehicle type for which the battery is intended (e.g., PHEV or BEV), the desired number of cells and modules and their layout in the pack, and the volume of production. BatPaC then designs the electrodes, cells, modules, and pack, and provides a complete, itemized cost breakdown [3]. From this perspective, the main task in specifying a PEV battery pack is to determine the energy storage capacity (kWh) and power capability (kW) needed to provide a desired driving range and level of acceleration performance [3].
The battery cost model upon which BatPaC was based was described in a paper presented at EVS-24 [7]. ANL later extended the model to include detailed analysis of manufacturing costs for many types of PEVs [8]. EPA arranged for an independent peer review of the BatPaC model in 2011 [9]. We used Version 3.0 of BatPaC, provided to EPA on 17 December 2015. EPA continues to work closely with ANL to test new versions of BatPaC and to guide the development of new features [3].
BatPaC models stiff-pouch, laminated prismatic format cells placed in double-seamed, rigid modules. The model supports liquid and air cooling, accounting for the resultant structure, volume, cost, and heat rejection capacity. It takes into consideration the cost of capital equipment, plant area, and labor for each step in the manufacturing process, and places limits on electrode coating thickness and other parameters applicable to current and near-term manufacturing processes. It also considers annual pack production volume and economies of scale for high-volume production [3].
2.4. Basis of Battery Energy Capacity Assignment
The next step was the specification of battery capacity needed for a given driving range. Range was modeled as a real-world, EPA-label, 5-cycle fuel economy range by applying a derating factor to an estimated EPA 2-cycle range. For BEVs, range was considered a beginning-of-life criterion, in accordance with EPA range labeling practice. For PHEVs, however, manufacturers are likely to consider mitigating loss of electric range because it will affect the utility factor, a component in the calculation of CO2 emissions over useful life. The PHEV sizing algorithm therefore reserves a buffer to be used as the battery ages, as described later in Section 2.5.1.
Battery capacity also depends on the vehicle energy consumption rate. This depends largely on vehicle weight, road load, component efficiencies, and other factors. The process for estimating energy consumption for each PEV was as follows. First, its curb weight was estimated as equal to the curb weight CWbase of the corresponding baseline conventional vehicle, modified by any applicable curb weight reduction WRtarget (representing a curb weight reduction of 0, 2, 7.5, 10, or 20 percent), and further modified by deletion of the weight of conventional powertrain components (for BEVs) and addition of electric content (for BEVs and PHEVs), as shown in Equations (2) through (5) [3].
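The curb-weight relationships in Equations (2) through (5) can be sketched as follows. This is a minimal illustration, not the study's spreadsheet; the function and argument names are ours, and the electric-content term is itself an output of the iteration described later.

```python
def pev_curb_weight(cw_base, wr_target, w_electric_content,
                    w_ice_powertrain=0.0):
    """Raw PEV curb weight, a sketch of Equations (2)-(5).

    cw_base: baseline conventional curb weight (lb)
    wr_target: applied curb weight reduction fraction
        (0, 0.02, 0.075, 0.10, or 0.20)
    w_electric_content: electric drive plus battery weight (lb),
        itself computed iteratively by the spreadsheet
    w_ice_powertrain: conventional powertrain "weight delete" (lb);
        nonzero for BEVs only
    """
    return cw_base * (1.0 - wr_target) - w_ice_powertrain + w_electric_content
```

For a BEV, both the weight delete and the electric content apply; for a PHEV, only the electric content is added.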
The curb weights CWbase of conventional baseline vehicles were assigned based on average weights for each of the six vehicle classes defined in the EPA baseline fleet that was generated for the broader analysis. The divisions among the classes, being based in part on power-to-weight ratio, are referred to here as “P2W class”. The P2W classes thereby establish target baseline curb weights and power requirements as inputs.
The weight of conventional powertrain components that would not exist on a battery electric vehicle (called "weight delete", or WICE_powertrain) was estimated for each of the six vehicle classes, as an approximate function of power. The weight of electric components (Welectric_content) included an estimated weight for the electric drive (motor and power electronics) as well as the weight of the battery. The weight of this content is computed iteratively by the spreadsheet, because it is strongly influenced by the total weight of the vehicle as well as several other factors. Electric drive weight was based on the targets established by US DRIVE [12] for the specific power of traction motors and power electronics in the 2020–2025 timeframe, at 1.4 kW/kg combined. An additional 50 pounds was added to BEVs to account for the gearbox.
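The electric drive weight estimate can be sketched directly from the stated assumptions (the 1.4 kW/kg combined specific-power target and the 50 lb BEV gearbox allowance); the function name and structure are ours:

```python
LB_PER_KG = 2.20462

def electric_drive_weight_lb(motor_kw, specific_power_kw_per_kg=1.4,
                             is_bev=True, gearbox_lb=50.0):
    """Weight of the electric drive (motor + power electronics) from the
    US DRIVE 2020-2025 combined specific-power target of 1.4 kW/kg,
    plus the 50 lb gearbox allowance applied to BEVs only (sketch)."""
    drive_lb = motor_kw / specific_power_kw_per_kg * LB_PER_KG
    return drive_lb + (gearbox_lb if is_bev else 0.0)
```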
Battery weight was computed from an estimated battery specific energy (kWh/kg). Specific energy is not a fixed value but will vary depending on the power-to-energy (P/E) ratio of the battery and its gross capacity. Specific energy was provided by a dynamic link to ANL BatPaC, which computes specific energy as one of its outputs.
The “raw” PEV curb weights represented by Equations (4) and (5) are typically significantly larger than the curb weights of the conventional baseline vehicles on which they are based, because the added weight of the battery is typically greater than the weight delete. However, the potential battery cost savings may make PEV mass reduction more cost effective than that represented in the conventional baseline vehicle [13]. As an approximate but straightforward way to directionally account for this effect, we further constrained the iteration process by forcing CWBEV or CWPHEV for each vehicle to match the curb weight of the corresponding baseline vehicle (CWbase_reduced) [3]. To do so, we solved for the percentage of mass reduction that must be applied to the glider (the vehicle exclusive of its powertrain) to offset the additional curb weight. In cases where more than 20 percent glider mass reduction would be needed to fully offset the difference, the reduction was capped at 20 percent, and only in these cases was the curb weight of the PEV allowed to be larger than that of its baseline counterpart [3]. The degree of applied mass reduction is tracked for each vehicle, and its cost is included when estimating the total vehicle cost.
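The glider mass reduction solve described above reduces to a simple capped proportion. A minimal sketch, with names of our choosing (the study solves this inside the spreadsheet iteration):

```python
def apply_glider_mass_reduction(cw_raw, cw_base_reduced, glider_weight,
                                cap=0.20):
    """Solve for the glider mass reduction fraction that forces the raw
    PEV curb weight back down to the baseline target, capped at 20
    percent (sketch). Returns (reduction fraction, final curb weight)."""
    needed = (cw_raw - cw_base_reduced) / glider_weight
    r = min(max(needed, 0.0), cap)   # only in capped cases does the PEV stay heavier
    return r, cw_raw - r * glider_weight
```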
In theory, a similar result might have been attained by applying each mass reduction percentage to the glider itself and allowing the resulting total curb weights to be unconstrained. A different set of data points would have resulted, skewed toward cases with little or no mass reduction applied. However, because we expect that mass reduction in PEVs is attractive to manufacturers for its potential to reduce battery cost, data points representing little or no mass reduction are of limited interest. Generating a greater density of points at greater percentages of mass reduction would therefore align better with expected industry practice.
After determining the PEV curb weight (constrained in most cases to match the baseline curb weight, but with a specific degree of applied mass reduction to do so), the method then computes the loaded vehicle weight (also known as inertia weight or ETW) by adding 300 pounds to the curb weight [3]:
The method then uses this test weight to develop an energy consumption estimate. First, it estimates the fuel economy (mi/gal) for a conventional light-duty vehicle of that test weight by a regression formula derived from the relationship between 2-cycle fuel economy and inertia weight. Compiled data on fuel economy vs. test weight from the EPA Trends Report [10] provided the primary data source. From this data, we derived a polynomial regression formula for fuel economy (mi/gal) as a function of ETW, as shown in Equation (7) [3].
An estimate of gross Wh/mile was then computed, assuming 33,700 Wh of energy per gallon of gasoline, as shown in Equation (8):
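The chain from test weight to gross Wh/mile can be sketched as below. The polynomial coefficients passed in are placeholders for illustration only; the actual fitted values of the Equation (7) regression are given in [3].

```python
WH_PER_GALLON = 33_700  # gasoline energy content assumed in Equation (8)

def gross_wh_per_mile(etw_lb, fe_coeffs):
    """Gross energy consumption from test weight, a sketch of Equations
    (7) and (8). fe_coeffs are polynomial coefficients (constant term
    first) for fuel economy (mi/gal) as a function of ETW."""
    mpg = sum(c * etw_lb ** i for i, c in enumerate(fe_coeffs))  # Equation (7)
    return WH_PER_GALLON / mpg                                   # Equation (8)
```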
A series of adjustments was then applied representing assumed differences in energy losses between conventional vehicles and electrified vehicles (this effectively brings the figure into electrified vehicle space) [3]. Several powertrain efficiencies were estimated to assist in this conversion, including battery discharge efficiency, inverter and motor efficiency, transmission efficiency and other losses (such as wheel bearing, axle, and brake drag losses), and the percentage of energy delivered to the wheels that is used to overcome road loads (that is, the portion of wheel energy that is not later lost to friction braking) [3]. These efficiencies were selected based on engineering judgement and then optimized in a model calibration step to yield battery capacity estimates in line with the capacities seen in production PEVs of similar specifications.
Estimated road loads appropriate for PEVs were derived from those for conventional vehicles by accounting for reductions in aerodynamic drag and rolling resistance. It was assumed that PEVs would support drag and rolling resistance reductions of 20 percent relative to model year 2008 baseline conventional vehicles. Based on simulation models used in the broader analysis, we estimated that a 20 percent reduction in each would reduce PEV road loads to approximately 90.5 percent of the baseline. The effect of reductions in curb weight was inherently represented by use of the ETW regression formula to convert curb weights into base energy consumption estimates [3].
The combined effect of these steps means that the estimated energy consumption of each PEV is derived from the energy consumption of a corresponding baseline conventional vehicle by applying a ratio of the road loads of the PEV (%RoadloadPEV) to those of the baseline vehicle (%Roadloadconv = 1) and a ratio of the assumed efficiencies (η) of the respective powertrains, as shown in Equation (9) [3].
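This scaling can be sketched as a single expression. The efficiency and road-load values below are illustrative placeholders, not the calibrated values used in the analysis:

```python
def pev_lab_wh_per_mile(ec_conv, roadload_pev, eta_conv, eta_pev,
                        roadload_conv=1.0):
    """Equation (9) sketch: scale the baseline conventional vehicle's
    2-cycle energy consumption by the road-load ratio and by the ratio
    of the assumed powertrain efficiencies."""
    return ec_conv * (roadload_pev / roadload_conv) * (eta_conv / eta_pev)
```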
Equation (9) yields an unadjusted (laboratory), weighted, combined two-cycle (55% FTP, 45% HFET) estimate of energy consumption. To convert this to an estimated real-world energy consumption figure, the analysis applies a derating factor. Derating factors are discussed in a later section. As seen in Equation (10), where the derating factor is illustrated with a value of 70 percent as an example, applying the derating factor results in the PEV on-road energy consumption estimate that the method uses to determine the required battery pack capacity for the vehicle [3].
Finally, as shown by Equation (11), the required battery energy capacity (BEC) is calculated as the on-road energy consumption (Wh/mile) multiplied by the desired range (mi), divided by the usable battery capacity (the usable state-of-charge (SOC) design window). As discussed later, the assumed SOC design window (SOC%) varied appropriately between BEVs and PHEVs [3].
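Equations (10) and (11) together reduce to a short calculation. A minimal sketch, assuming the derating factor divides the laboratory consumption figure (real-world consumption is higher than laboratory consumption by the derating factor):

```python
def required_pack_kwh(lab_wh_per_mile, range_mi, soc_window, derate=0.70):
    """Sketch of Equations (10) and (11): derate the 2-cycle consumption
    to an on-road estimate, then size the gross pack for the range."""
    onroad_wh_per_mile = lab_wh_per_mile / derate            # Equation (10)
    return onroad_wh_per_mile * range_mi / soc_window / 1000.0  # Equation (11), kWh
```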
The iterative nature of the battery sizing problem means that all the preceding calculations are constructed in a spreadsheet as circular references and performed iteratively by the spreadsheet software until the estimated weights, sizes, and energy consumption figures converge [3].
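The circular-reference iteration can be illustrated as a fixed-point loop: battery weight raises curb weight, which raises energy consumption, which raises the required capacity, until the figures converge. The linear energy model and pack specific energy below are crude illustrative stand-ins, not the study's inputs (which come from the ETW regression and a dynamic link to BatPaC):

```python
LB_PER_KG = 2.20462

def iterate_battery_size(cw_base_lb, range_mi, soc_window,
                         wh_per_mile_per_lb=0.075, pack_wh_per_kg=150.0,
                         tol=1e-6, max_iter=200):
    """Fixed-point sketch of the spreadsheet's battery-sizing iteration.
    Returns (pack capacity in kWh, battery weight in lb) at convergence."""
    battery_lb = 0.0
    kwh = 0.0
    for _ in range(max_iter):
        etw_lb = cw_base_lb + battery_lb + 300.0        # test weight
        wh_per_mile = wh_per_mile_per_lb * etw_lb       # stand-in for Eqs (7)-(10)
        kwh = wh_per_mile * range_mi / soc_window / 1000.0  # Equation (11)
        new_battery_lb = kwh * 1000.0 / pack_wh_per_kg * LB_PER_KG
        if abs(new_battery_lb - battery_lb) < tol:
            break
        battery_lb = new_battery_lb
    return kwh, battery_lb
```

Because the battery adds only a fraction of its own weight back to the vehicle per pass, the loop is a contraction and converges in a few dozen iterations at most.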
2.5. Selection of Primary Inputs
The left side of Figure 1 depicts the role of battery sizing assumptions and battery design assumptions in the model. Battery sizing assumptions include parameters that determine necessary battery power and capacity, such as vehicle weight, energy efficiency, usable capacity, specific energy, mass of motor and power electronics, motor power, allowances for power and capacity fade, and similar factors. Battery design assumptions include factors such as cell capacity, pack topology, cells per module, thermal medium, electrode aspect ratio and coating thickness, and manufacturing volume. These assumptions are reviewed in detail here.
2.5.1. Inputs Influencing Battery Sizing
One important input to the battery sizing process is the usable SOC design window. Based on observation of existing vehicles, we chose 90 percent for BEV200 and 85 percent for other BEVs. For PHEVs, a smaller window was assigned to beginning-of-life (BOL) and a somewhat larger window to end-of-life (EOL). Battery capacity was specified using the BOL figure, which effectively provides a buffer that can be used as the vehicle ages. The BOL SOC window for PHEV20 was placed at approximately 65 percent, while the EOL window was placed at 75 percent. For PHEV40, the BOL window was 67 percent and the EOL window 77 percent. These figures were chosen by engineering judgement and by considering their effect on the ability of the sizing method to reproduce battery capacities of production PHEVs [3].
Another important input to the battery sizing process is the required power capability of the battery. Target battery power (10 s pulse) was set to 32 percent greater than the peak motor power, to account for losses in the motor (10%) and EOL power fade (20%). In the case of BEVs and many longer-range PHEVs, target capacity drove the design more than target power, such that the battery is sufficiently large that its natural power capability exceeds the target power. These batteries therefore would have enough power capability to support moderate levels of fast charging and provide a buffer against power fade [3].
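The 32 percent margin follows from compounding the two stated allowances (1.10 × 1.20 = 1.32). A one-line sketch:

```python
def target_battery_power_kw(peak_motor_kw, motor_loss=0.10, eol_fade=0.20):
    """Target 10-s pulse battery power: peak motor power increased for
    motor losses (10%) and EOL power fade (20%), a combined 32% margin."""
    return peak_motor_kw * (1.0 + motor_loss) * (1.0 + eol_fade)
```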
In the analysis, PHEV40 was assumed to operate as a range-extended electric vehicle, which meant that the motor and battery would be sized to provide all-electric operation in all driving situations, and hence the PHEV40 range is all-electric. The battery and motor for PHEV20 were sized for blended operation, in which it was assumed the engine could assist the motor during the charge depletion phase. All PHEVs were configured with a single propulsion motor, in contrast to some production PHEV designs that split the total power rating between two motors. While we acknowledge that most PHEVs include a second motor used primarily as a generator, the analysis did not assign a separate weight to this component but considered it as part of the weight of the conventional powertrain [3]. Although a PHEV application may allow some downsizing of the conventional portion of the powertrain, the analysis did not consider potential weight reductions from this source.
The derating factor also plays a role in determining battery size. The EPA range labeling rule allows manufacturers to determine the label range value either by applying a default 70 percent derating factor to a 2-cycle range test result, or by deriving a custom derating factor through an optional process. According to EPA vehicle certification records for MY 2012–2016 BEVs, the vast majority of BEV models used the default 70 percent derating factor. The same data shows that Tesla Motors has elected the optional process for its BEV200+ vehicles, resulting in a factor of nearly 80 percent for the standard Model S (60 to 90 kWh), and from 73 to 76 percent for higher-performance and all-wheel-drive versions of the S and X. We therefore adopted a derating factor of 75 percent for BEV200 and 70 percent for all other PEVs.
2.5.2. Inputs Influencing Battery Design
User inputs to BatPaC were chosen as follows. For performance, battery power and energy requirements were derived from the battery sizing analysis described previously. Other considerations were battery chemistry, cell and module layout, and production volumes. The pack voltages, electrode dimensions, cooling capacity, and cell capacities that were output from BatPaC were confirmed to ensure consistency with current and expected industry practice. Because the overall analysis accounted for warranty costs separately, the warranty costs computed by BatPaC were deducted from the output costs.
For chemistry, we selected NMC622 cathode for BEV and PHEV40 packs, and a blended cathode (25 percent NMC333 and 75 percent LMO, the BatPaC default value) for PHEV20 packs, both with graphite anode. These selections were based on the known characteristics of the chemistries and their representation in current and near-term production vehicles.
Pack topology was optimized by choosing values for cells per module and number of modules to target a preferred cell capacity (in ampere-hours). Since the number of modules per pack must be a whole number, varying the number of cells per module allows the number of cells per pack and their capacities to be better targeted. The number of cells per module was varied between 20 and 36 as needed to achieve target pack voltages and maximum cell capacities [2].
BEV cells were limited to a maximum capacity of 90 A-hr; most were significantly smaller, as only the larger BEV packs approached this limit. The BMW i3 (94 Ah) provides an example suggesting that this can be an effective cell capacity in a BEV application. PHEV cells were limited to 60 A-hr. Electrode coating thickness was limited to 100 microns, which again was only approached by the largest BEV batteries. All packs were modeled with liquid glycol-water cooling. Pack voltages were limited to the approximate range of 300 V to 400 V. Electrode aspect ratio was 3:1, supported by recent developments in pack design that suggest a movement toward low-profile packs mounted in the floor. BatPaC computed costs for a range of manufacturing volumes from 50,000 to 450,000 packs per year.