Author: Dacian
Solidity only supports integer arithmetic, so division truncates and performing division before multiplication can cause precision loss due to rounding. Numbers in Solidity also need to be scaled to the same precision before being combined. Most Solidity developers are aware of these requirements, so surface-level precision loss vulnerabilities are rare, but for the discerning auditor it is very possible to find hidden precision loss vulnerabilities.
Hidden precision loss vulnerabilities can occur in modular smart contract projects where numbers are manipulated and passed between functions, contracts & libraries. Using a real-world example from Sherlock's recent USSD audit contest, this article will highlight techniques used by elite auditors to find & maximize hidden precision loss vulnerabilities.
Expand Function Calls & Variables in Equations
Decentralized Finance (DeFi) projects often feature mathematical equations implemented in Solidity code. Novice auditors' eyes glaze over lines of code implementing equations, but experienced auditors use a specific technique for analyzing them: manually expanding the function calls & variables in an equation to expose hidden division before multiplication. Let's consider a simple example from USSDRebalancer.BuyUSSDSellCollateral():
function BuyUSSDSellCollateral(uint256 amountToBuy) internal {
CollateralInfo[] memory collateral = IUSSD(USSD).collateralList();
uint amountToBuyLeftUSD = amountToBuy * 1e12;
This code looks innocent enough; amountToBuy is passed as input then multiplied, so what could go wrong? Using the technique of expanding out variables in equations, we find the source of amountToBuy:
function getSupplyProportion() public view returns (uint256, uint256) {
uint256 vol1 = IERC20Upgradeable(uniPool.token0()).balanceOf(address(uniPool));
uint256 vol2 = IERC20Upgradeable(uniPool.token1()).balanceOf(address(uniPool));
if (uniPool.token0() == USSD) {
return (vol1, vol2);
}
return (vol2, vol1);
}
function rebalance() override public {
uint256 ownval = getOwnValuation();
(uint256 USSDamount, uint256 DAIamount) = getSupplyProportion();
if (ownval < 1e6 - threshold) {
// @audit amountToBuy is the parameter of this call
BuyUSSDSellCollateral((USSDamount - DAIamount / 1e12)/2);
Then we expand the definition of amountToBuyLeftUSD using the definition of amountToBuy:
amountToBuyLeftUSD = amountToBuy * 1e12;
amountToBuyLeftUSD = ((USSDamount - DAIamount / 1e12) / 2) * 1e12;
Now the precision loss that was hidden behind function calls and variable definitions becomes apparent: amountToBuy is first divided by 2, then multiplied by 1e12, and the result is stored in amountToBuyLeftUSD, creating a potential precision loss due to division before multiplication.
How can we be certain that a precision loss occurs here, and once established, how can we maximize this finding?
Simplify Expanded Equations
Once we have the expanded equation, our next step is to simplify it to remove the division before multiplication and obtain a simplified "correct" form that we can test against:
amountToBuyLeftUSD = ((USSDamount - DAIamount / 1e12) / 2) * 1e12;
// @audit /2 * 1e12 can be rewritten as * 1e12 / 2,
// removes division before multiplication, solving
// precision loss
= (USSDamount - DAIamount / 1e12) * 1e12 / 2;
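To see the two forms diverge on concrete numbers, here is a minimal Python sketch; Python's `//` truncates like Solidity's integer division, and the sample amounts are illustrative rather than taken from the audit:

```python
# USSD has 6 decimals, DAI has 18 decimals
USSDamount = 1_000_001       # 1.000001 USSD
DAIamount  = 10**18 + 1      # just over 1 DAI

# original: division by 2 happens before the * 1e12 scale-up
original   = (USSDamount - DAIamount // 10**12) // 2 * 10**12

# simplified: multiply first, then divide
simplified = (USSDamount - DAIamount // 10**12) * 10**12 // 2

print(original, simplified)  # 0 500000000000
```

The inner subtraction leaves 1; dividing that by 2 first truncates to 0 and the subsequent multiplication cannot recover it, while multiplying first preserves half a USSD unit at 18-decimal scale.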
Create Contract With Original & Simplified Equations
Then we want to create a simple contract, src/PrecisionLoss.sol, that implements both the original & simplified versions of the equation:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
contract PrecisionLoss {
function ussdOriginalAmountToBuy(uint ussdAmount, uint daiAmount)
public pure returns (uint) {
// @audit /2 * 1e12 division before multiplication
// causes precision loss
return (ussdAmount - daiAmount / 1e12)/2 * 1e12;
}
function ussdSimplifiedAmountToBuy(uint ussdAmount, uint daiAmount)
public pure returns (uint) {
// @audit /2 * 1e12 can be rewritten as * 1e12 / 2,
// removes division before multiplication, solving precision
// loss
return (ussdAmount - daiAmount / 1e12) * 1e12 / 2;
}
}
Use Foundry Invariant Fuzz Test On Both Equations
Next we want to use Foundry's Invariant Fuzz Testing to:
detect if there actually is precision loss between the two equations,
if so, maximize/optimize the input parameters required to exploit it,
especially hunt for a set of inputs where the original equation equals 0 but the simplified equation is greater than 0, as this is usually a more damaging form of precision loss.
Create The Fuzz Testing Handler
First we'll create a handler, test/InvariantPrecisionLossHandler.sol. It takes as input the PrecisionLoss contract we've previously created and implements a fuzz testing function that will:
define the range of inputs we want to test,
call the original & simplified functions in that contract,
contain some logic to save the inputs that maximize the finding:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {PrecisionLoss} from "../src/PrecisionLoss.sol";
import {console2} from "forge-std/console2.sol";
import {CommonBase} from "forge-std/Base.sol";
import {StdUtils} from "forge-std/StdUtils.sol";
contract InvariantPrecisionLossHandler is CommonBase, StdUtils {
// real contract being tested
PrecisionLoss internal _underlying;
// invariant variables, set to 1 as the invariant will
// be originalOutput != 0, so we don't want it to fail immediately
uint public originalOutput = 1;
uint public simplifiedOutput = 1;
// optimized finding variables
uint public maxPrecisionLoss;
uint public mplUssdAmount;
uint public mplDaiAmount;
constructor(PrecisionLoss underlying) {
_underlying = underlying;
}
// function that will be called during invariant fuzz tests
function ussdAmountToBuy(uint uusdAmount, uint daiAmount) public {
// constrain inputs between $1 & $1B in their respective
// precision ranges
uusdAmount = bound(uusdAmount, 1e6 , 1000000000e6 );
daiAmount = bound(daiAmount , 1e18, 1000000000e18);
// requirement of the functions being tested
vm.assume(uusdAmount > daiAmount/1e12);
// run both original & simplified functions
originalOutput = _underlying.ussdOriginalAmountToBuy(uusdAmount, daiAmount);
simplifiedOutput = _underlying.ussdSimplifiedAmountToBuy(uusdAmount, daiAmount);
// find the difference in precision loss
uint precisionLoss = simplifiedOutput - originalOutput;
//
// if this run produced greater precision loss than all
// previous, or if the precision loss was the same AND
// originalOutput == 0 AND simplifiedOutput > 0, then save it
// & its inputs
//
// we are really interested in seeing if we can reach a state
// where originalOutput == 0 && simplifiedOutput > 0 as this
// is a more damaging form of precision loss
//
// could also optimize for lowest uusdAmount & daiAmount
// required to produce the precision loss.
//
if(precisionLoss > 0) {
if(precisionLoss > maxPrecisionLoss ||
(precisionLoss == maxPrecisionLoss
&& originalOutput == 0 && simplifiedOutput > 0)) {
maxPrecisionLoss = precisionLoss;
mplUssdAmount = uusdAmount;
mplDaiAmount = daiAmount;
console2.log("originalOutput : ", originalOutput);
console2.log("simplifiedOutput : ", simplifiedOutput);
console2.log("maxPrecisionLoss : ", maxPrecisionLoss);
console2.log("mplUssdAmount : ", mplUssdAmount);
console2.log("mplDaiAmount : ", mplDaiAmount);
}
}
}
}
Create The Invariant Fuzz Test
Next we'll create the test itself, test/InvariantPrecisionLoss.t.sol, which creates & sets up the handler and defines the invariant to be tested. Please note this uses v1.5.5 of forge-std; if it doesn't compile and you are on an older version, please update, as there were invariant-related breaking changes.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;
import {PrecisionLoss} from "../src/PrecisionLoss.sol";
import {InvariantPrecisionLossHandler} from "./InvariantPrecisionLossHandler.sol";
import {console2} from "forge-std/console2.sol";
import {Test} from "forge-std/Test.sol";
contract InvariantPrecisionLossTest is Test {
// real contract
PrecisionLoss internal _underlying;
// handler which exposes real contract
InvariantPrecisionLossHandler internal _handler;
function setUp() public {
_underlying = new PrecisionLoss();
_handler = new InvariantPrecisionLossHandler(_underlying);
// invariant fuzz targets _handler contract
targetContract(address(_handler));
// functions to target during invariant tests
bytes4[] memory selectors = new bytes4[](1);
selectors[0] = _handler.ussdAmountToBuy.selector;
targetSelector(FuzzSelector({
addr: address(_handler),
selectors: selectors
}));
}
// invariant: original output not 0. We want to see if
// there is a set of inputs where the original equation
// originalOutput == 0 but the simplified equation > 0
// Setting this invariant makes foundry try to break it
// which dramatically increases the efficiency of the fuzz test
function invariant_originalOutputNotZero() public view {
assert(_handler.originalOutput() != 0);
}
}
Run Invariant Fuzz Test To Obtain Optimal Exploit Inputs
We can run this test with forge test --match-test invariant_originalOutputNotZero -vvv (using forge 0.2.0 a26edce 2023-05-25T00:04:00.488745146Z or later, as there were breaking changes where --match became --match-test), which very quickly finds sets of inputs that:
create a precision loss between the original & simplified equations,
result in the original == 0 but the simplified > 0.
Here are two sets of inputs from the fuzzing runs which achieve these goals:
originalOutput : 0
simplifiedOutput : 500000000000
maxPrecisionLoss : 500000000000
mplUssdAmount : 1000001
mplDaiAmount : 1000000000000000002
originalOutput : 0
simplifiedOutput : 500000000000
maxPrecisionLoss : 500000000000
mplUssdAmount : 1000000000000000
mplDaiAmount : 999999999999999999999999999
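These fuzzer-found inputs can be sanity-checked offline. The Python sketch below replays both sets, using `//` to mirror Solidity's truncating division:

```python
# first set of fuzzer-found inputs
ussdAmount, daiAmount = 1_000_001, 1_000_000_000_000_000_002

original   = (ussdAmount - daiAmount // 10**12) // 2 * 10**12
simplified = (ussdAmount - daiAmount // 10**12) * 10**12 // 2
assert (original, simplified) == (0, 500_000_000_000)

# second set: the same pattern at a much larger scale
ussdAmount, daiAmount = 1_000_000_000_000_000, 999_999_999_999_999_999_999_999_999

original   = (ussdAmount - daiAmount // 10**12) // 2 * 10**12
simplified = (ussdAmount - daiAmount // 10**12) * 10**12 // 2
assert (original, simplified) == (0, 500_000_000_000)
```

In both cases the scaled subtraction leaves exactly 1, which the original form truncates to 0 before scaling up; the fuzzer has homed in on precisely the boundary we asked it to hunt for.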
Improving The Simplified Equation
Let us now consider our simplified equation:
(ussdAmount - daiAmount / 1e12) * 1e12 / 2
There is still an initial division where daiAmount is divided by 1e12; this is required as ussdAmount has 6 decimal places while daiAmount has 18 decimal places, so they must be scaled into the same precision before being combined. However this can introduce another source of precision loss, since the result is then multiplied again.
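The cost of that scale-down is easy to quantify: truncating daiAmount / 1e12 can discard up to 1e12 - 1 wei of DAI, just under one whole unit at USSD's 6-decimal precision. A quick Python sketch with an illustrative amount:

```python
daiAmount = 10**18 + 10**12 - 1   # 1 DAI plus almost 1e12 extra wei

# scaling down truncates away the low 12 digits
scaled_down = daiAmount // 10**12
print(scaled_down)                # 1000000: the extra wei vanished

# scaling the 6-decimal side up instead loses nothing
ussdAmount = 1_000_001
scaled_up = 10**12 * ussdAmount - daiAmount
print(scaled_up)                  # 1: every wei accounted for
```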
Instead of scaling daiAmount down, another alternative is to scale ussdAmount up; let's pursue this approach and see if we can make even further improvements. Add this new function to src/PrecisionLoss.sol:
function ussdImprovedAmountToBuy(uint ussdAmount, uint daiAmount)
public pure returns (uint) {
// @audit 1e12 / 2 can be simplified to * 5e11
// = (ussdAmount - daiAmount / 1e12) * 5e11
// to remove / 1e12, multiply everything by 1e12 / 1e12
// = (1e12*ussdAmount - daiAmount) / 1e12 * 5e11
// finally / 1e12 * 5e11 can be rewritten as * 5e11 / 1e12
// = (1e12*ussdAmount - daiAmount) * 5e11 / 1e12
return (1e12*ussdAmount - daiAmount) * 5e11 / 1e12;
}
This improved equation now scales ussdAmount up, performs the subtraction, then performs multiplication and finally division; we have completely removed any division before multiplication.
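Replaying the divergent input found earlier shows the improved form at work; this Python sketch again uses `//` for Solidity-style truncating division:

```python
ussdAmount = 1_000_001
daiAmount  = 10**18 + 1

simplified = (ussdAmount - daiAmount // 10**12) * 10**12 // 2
improved   = (10**12 * ussdAmount - daiAmount) * 5 * 10**11 // 10**12

# the exact real-valued result here is 499999999999.5; the improved
# form truncates down to it while the simplified form, having already
# discarded the low digits of daiAmount, lands above it
print(simplified, improved)  # 500000000000 499999999999
```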
Use Stateless Fuzz Test To Verify Improved Equation
To verify whether our improved equation is better than our simplified equation, we'll add this stateless fuzz test to test/InvariantPrecisionLoss.t.sol:
// stateless fuzz test to check if improved version retains
// more precision than the simplified version, and to
// compare all 3 versions (original, simplified, improved)
function testUssdImprovedAmountToBuy(uint uusdAmount, uint daiAmount) public {
// constrain inputs between $1 & $1B in their respective precision
// ranges
uusdAmount = bound(uusdAmount, 1e6 , 1000000000e6 );
daiAmount = bound(daiAmount , 1e18, 1000000000e18);
// requirement of the functions being tested
vm.assume(uusdAmount > daiAmount/1e12);
// run original, simplified & improved functions
uint originalOutput = _underlying.ussdOriginalAmountToBuy(uusdAmount, daiAmount);
uint simplifiedOutput = _underlying.ussdSimplifiedAmountToBuy(uusdAmount, daiAmount);
uint improvedOutput = _underlying.ussdImprovedAmountToBuy(uusdAmount, daiAmount);
console2.log("uusdAmount : ", uusdAmount);
console2.log("daiAmount : ", daiAmount);
console2.log("originalOutput : ", originalOutput);
console2.log("simplifiedOutput : ", simplifiedOutput);
console2.log("improvedOutput : ", improvedOutput);
// fail the test if the improved & simplified outputs don't match
assertEq(simplifiedOutput, improvedOutput);
}
Before running this test we want to add the following to foundry.toml to increase the amount of fuzz testing runs:
[fuzz]
runs = 100000
max_local_rejects = 999999999
max_test_rejects = 999999999
Then run the test: forge test --match-test testUssdImprovedAmountToBuy -vvv
After a few runs we can see that the improved version works even better than the simplified version; here are some run outputs:
uusdAmount : 1000001
daiAmount : 1000000000000000001
originalOutput : 0
simplifiedOutput : 500000000000
improvedOutput : 499999999999
uusdAmount : 999999999000005
daiAmount : 1000000000000000001
originalOutput : 499999999000002000000000000
simplifiedOutput : 499999999000002500000000000
improvedOutput : 499999999000002499999999999
uusdAmount : 999999999003061
daiAmount : 999999999000000000000001942
originalOutput : 1530000000000000
simplifiedOutput : 1530500000000000
improvedOutput : 1530499999999029
We have now verified that the improved form of our equation, which completely removes all division before multiplication, preserves even more precision than our initial simplified form.
Verifying Correctness Of Simplified Equations
Sometimes no precision loss occurs even though there is division before multiplication. In such cases it is still preferable to replace the original implementation with the simplified version, which is more efficient and easier to understand. The same approach previously outlined can be very helpful for developers in refactoring their equations into simplified forms while ensuring correctness via automated fuzz testing. Consider this equation from USSD.collateralFactor():
totalAssetsUSD +=
(((IERC20Upgradeable(collateral[i].token).balanceOf(
address(this)
) * 1e18) /
(10 **
IERC20MetadataUpgradeable(collateral[i].token)
.decimals())) *
collateral[i].oracle.getPriceUSD()) /
1e18;
One technique to use immediately with such equations is to rename the terms to see more easily what is going on. We'll add 2 more functions to src/PrecisionLoss.sol to contain the original & simplified versions of this equation:
function ussdOriginalTotalAssets(
uint balance, uint decimals, uint priceFiat)
public pure returns (uint) {
return (balance * 1e18 / (10**decimals)) * priceFiat / 1e18;
}
function ussdSimplifiedTotalAssets(
uint balance, uint decimals, uint priceFiat)
public pure returns (uint) {
// (balance * 1e18 / (10**decimals)) * priceFiat / 1e18;
// 1) multiplying and dividing by 1e18 cancel out:
// (balance / (10**decimals)) * priceFiat
// 2) change order of operations to do multiplication first
return balance * priceFiat / (10 ** decimals);
}
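Before fuzzing, a quick Python sanity check suggests why the two forms should agree: since 10**decimals divides 1e18 evenly for decimals <= 18, the scale-up and scale-down cancel without truncation. The function names below mirror the Solidity ones but this is a sketch with illustrative values, not the audited code:

```python
def original(balance, decimals, price_fiat):
    # (balance * 1e18 / 10**decimals) * price / 1e18, with truncating division
    return balance * 10**18 // 10**decimals * price_fiat // 10**18

def simplified(balance, decimals, price_fiat):
    return balance * price_fiat // 10**decimals

# spot-check an 18-decimal and an 8-decimal collateral balance
price = 1_987_654_321_000_000_123   # ~1.98 USD at 18-decimal precision
for balance, decimals in [(123_456_789_012_345_678, 18), (1_234_567_891, 8)]:
    assert original(balance, decimals, price) == simplified(balance, decimals, price)
```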
In this project balance can have either 18 or 8 decimal places, so we'll add a couple of simple stateless fuzz testing functions to test/InvariantPrecisionLoss.t.sol:
function testUssdTotalAssets18D(uint balance, uint priceFiat) public {
uint decimals = 18;
// constrain inputs between $1 & $1B in their respective precision ranges
balance = bound(balance , 1e18, 1000000000e18);
priceFiat = bound(priceFiat, 1e18, 1000000000e18);
uint originalOutput = _underlying.ussdOriginalTotalAssets(balance, decimals, priceFiat);
uint simplifiedOutput = _underlying.ussdSimplifiedTotalAssets(balance, decimals, priceFiat);
assertEq(originalOutput, simplifiedOutput);
}
function testUssdTotalAssets8D(uint balance, uint priceFiat) public {
uint decimals = 8;
// constrain inputs between $1 & $1B in their respective precision ranges
balance = bound(balance , 1e8, 1000000000e8);
priceFiat = bound(priceFiat, 1e18, 1000000000e18);
uint originalOutput = _underlying.ussdOriginalTotalAssets(balance, decimals, priceFiat);
uint simplifiedOutput = _underlying.ussdSimplifiedTotalAssets(balance, decimals, priceFiat);
assertEq(originalOutput, simplifiedOutput);
}
Then run the tests: forge test --match-test testUssdTotalAssets
The stateless fuzz tests pass, verifying that our simplified equation produces the same output as the original version.