Here is the math that keeps government contractors up at night: a competitive federal proposal costs between $30,000 and $100,000 to produce. For complex procurements (large IDIQs, full-and-open competitions, proposals requiring an oral presentation), that number can exceed $250,000. If you price too high, you spent that money for nothing. If you price too low, you win a contract you will lose money on. The margin between winning and losing on price is often less than 5%.
Despite the stakes, most pricing decisions in federal contracting are built on a thin foundation: the last contract the company worked on, a few data points from GovWin or Bloomberg Government, and the pricing analyst's gut feeling about what the agency will accept. We built Price-to-Win and Competitive Radar to replace that guesswork with 25 years of real award data.
Pricing corridors, not point estimates
The first thing we got right was understanding that price-to-win is not a single number. It is a distribution. When an agency buys IT managed services under NAICS 541512 through a best-value IDIQ, winning bids cluster within a specific range that varies by agency, contract size, and period of performance. Delon builds that distribution from actual award data (millions of contract actions across USAspending and FPDS) and shows you where winning bids land for contracts that match the one you are pricing.
The corridor narrows as you add specificity. A pricing corridor for “all DoD IT services” is wide and only marginally useful. A corridor for “Army CECOM IT support services, NAICS 541512, competitive best-value, $10M-$25M range” is tight enough to anchor a real pricing strategy. Our models factor in contract type (FFP, T&M, cost-plus), evaluation method (LPTA vs. best-value), set-aside status, incumbent pricing, and agency-specific patterns.
You see the full distribution: the 25th, 50th, and 75th percentiles of winning bids, along with the outliers that won despite being outside the corridor. That matters because sometimes the outlier tells you more than the average. An agency that awarded above the 75th percentile three times in a row is telling you something about what they value.
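Conceptually, the corridor is a percentile summary over matched awards. Here is a minimal sketch of that idea in Python; the sample bid amounts and the 1.5x-IQR outlier convention are illustrative assumptions, not Delon's actual model:

```python
from statistics import quantiles

# Hypothetical sample: winning bid totals (in $M) for awards matching the
# filters (e.g. NAICS 541512, best-value, $10M-$25M, same buying office).
winning_bids = [11.8, 12.6, 13.1, 13.4, 13.9, 14.1, 14.6, 15.2, 15.9, 17.5]

def pricing_corridor(bids):
    """Return the 25th/50th/75th percentile corridor plus any bids that
    fall outside 1.5x the interquartile range (a common outlier rule)."""
    q1, median, q3 = quantiles(bids, n=4)  # quartiles of the sample
    iqr = q3 - q1
    outliers = [b for b in bids if b < q1 - 1.5 * iqr or b > q3 + 1.5 * iqr]
    return {"p25": q1, "p50": median, "p75": q3, "outliers": outliers}

corridor = pricing_corridor(winning_bids)
```

The real models condition the sample on many more dimensions (contract type, evaluation method, set-aside status), but the output shape is the same: a band of percentiles, not a point estimate.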
Head-to-head competitive records
Price-to-Win gives you the range. Competitive Radar tells you who you are up against and what they do. For any opportunity you are pursuing, Delon identifies the likely competitors based on NAICS overlap, vehicle positions, past performance with the buying agency, and geographic presence. Then it pulls their track record: contracts won, contracts lost, pricing on similar work, and their win rate against you specifically.
This is not abstract market research. It is a head-to-head record. If you have competed against Booz Allen on four Army CECOM contracts in the last three years, Delon shows you all four: who won, at what price, and what the evaluation factors were. If you lost on price twice and lost on technical approach once, that pattern tells you exactly where to focus.
Incumbent pricing analysis
Recompetes are the bread and butter of federal contracting. Roughly 40% of contract dollars are awarded on contracts with a prior incumbent. When you bid on a recompete, the single most important data point is what the incumbent is charging. Delon pulls the incumbent's pricing from the current contract and their pricing on similar contracts elsewhere. If the incumbent is running a DoD help desk at $85/hour on the current contract but their rate on three similar contracts averages $78/hour, you know there is room to undercut on price.
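The incumbent comparison in the paragraph above reduces to a simple rate-gap calculation. A sketch, using the same illustrative numbers ($85/hour current, ~$78/hour on comparable work):

```python
def incumbent_rate_gap(current_rate, comparable_rates):
    """Compare the incumbent's rate on the current contract to their
    average rate on similar contracts elsewhere. A positive gap suggests
    room to undercut on price."""
    benchmark = sum(comparable_rates) / len(comparable_rates)
    return {"benchmark": round(benchmark, 2),
            "gap": round(current_rate - benchmark, 2)}

# Hypothetical: $85/hr on the current contract vs. three similar contracts.
gap = incumbent_rate_gap(85.0, [76.0, 78.0, 80.0])
# gap -> {"benchmark": 78.0, "gap": 7.0}
```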
We also flag when incumbents are likely to be vulnerable for non-price reasons: declining CPARS ratings, contract modifications that suggest scope issues, or protest history on the current vehicle. Pricing is one dimension. Understanding the full competitive picture is what separates a winning bid from an expensive guess.
How the models work
Price-to-Win is powered by ML models trained on the full USAspending award history: every contract action, modification, and final obligated amount going back to fiscal year 2001. The hard part is not the model. It is the entity resolution underneath it. Companies merge, get acquired, change names, and register different entities for different contracts. A raw USAspending query for "SAIC" misses half their contracts because they were awarded under Leidos, Engility, or one of a dozen subsidiary names.
We resolved entities across 25 years of procurement data, linking companies across DUNS/UEI transitions, mergers, acquisitions, and name changes. That entity graph is what makes the competitive intelligence accurate. When we tell you a competitor's pricing history, it includes all their corporate identities, not just their current SAM.gov registration.
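One common way to model an entity graph like this is a union-find structure, where linking two identities merges their connected components. A minimal sketch; the specific corporate links are illustrative examples taken from the text, and real resolution relies on DUNS/UEI crosswalks, M&A records, and name matching rather than hand-entered pairs:

```python
class EntityGraph:
    """Union-find over corporate identities: any two linked names
    resolve to the same canonical root."""

    def __init__(self):
        self.parent = {}

    def find(self, name):
        self.parent.setdefault(name, name)
        while self.parent[name] != name:
            self.parent[name] = self.parent[self.parent[name]]  # path halving
            name = self.parent[name]
        return name

    def link(self, a, b):
        # Merge the components containing a and b.
        self.parent[self.find(a)] = self.find(b)

g = EntityGraph()
g.link("Engility", "SAIC")  # illustrative link from the example above
g.link("Leidos", "SAIC")

# Querying a competitor's history means collecting every award whose
# awardee resolves to the same root as the name you searched for.
```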
What this looks like in practice
You are bidding on a DoD IT support contract, $15M over five years, best-value evaluation. You open the opportunity in Delon. The pricing corridor shows winning bids on comparable contracts cluster between $13.2M and $15.8M, with the median at $14.1M. Competitive Radar identifies four likely competitors based on vehicle position and past performance with the buying office. Two of them have historically priced below the corridor median. The incumbent is at $14.8M on the current contract and $13.9M on a similar contract at another agency.
Now you are making a pricing decision with real data. You know the range the agency expects. You know what your competitors have charged. You know where the incumbent sits. That is the difference between a calibrated bid and a coin flip.