How to Win CCDC: Scoring
Most teams want to win, but very few properly prioritize their activities to do so. This article is meant to change that.
Scoring in CCDC is usually relatively clear cut, despite the ridiculous ranges the team packet gives you. In a typical Midwest, Mid-Atlantic, or Rocky Mountain State Qualifier or Invitational competition, scoring breaks down into 3 categories: Uptime, Inject Responses, and Red Team. The Team Packet gives score ranges that look a bit nonsensical, but this is deliberate: it lets the Chief Judges adjust score weights for each competition.
Scoring Breakdown
In a Midwest, Mid-Atlantic, or Rocky Mountain State Qualifier or Invitational, the typical scoring breakdown looks something like this:
- Uptime ≈ 40%
- Inject Responses ≈ 40%
- Red Team ≈ 20%
These weights are subject to change at any time by the Chief Judge, but they have held generally true over my time competing and volunteering.
Uptime Scoring
Uptime scoring is exactly what it says on the tin: it measures the relative performance of teams in terms of uptime percentage. Uptime is determined by the Scoring Engine, which checks each service for specific information. This might be a string in a webpage, the existence of a DNS record, a file hash, the ability to perform actions on a given service, or something else. This score is typically influenced first by blue teams taking their own scored services down, and second by red team taking scored services down. Any CCDC veteran will tell you that blue teams are their own worst enemy and shoot themselves in the foot constantly, especially earlier in the season.
In every competition I have participated in, on either red team or blue team, blue teams have always been their own worst enemy here. I've lost count of how many times I've seen blue teams take down their own services, then blame us for it. I'm guilty of this myself! There have been numerous times where a teammate rebooted a box without telling me, which made me think red team took it down. Even before we've started exploitation, we will have teams report red-team-caused outages. Sorry Team X, I didn't take down all your scored services 10 minutes in. Talk to your firewall admin; they are probably rebooting. There is a reasonable chance I am in your DC by then, though.
Inject Scoring
Inject Responses are where teams struggle the most. The best team in Minnesota in the 2025 competition season scored 58% on Injects. Taken as a percentage grade, that's still an F. Inject Responses are the biggest opportunity for improvement for most teams, and sometimes even grant the most points in the competition. Fortunately, I have an article on Injects which should help provide some guidance. Ensuring that you have a solid Writer is critical, and some teams even use two. If you want to know what a Writer is and does, I have an article on Team Dynamics which clarifies roles. Your Writer is your most important team member, as they have the greatest influence on the total points you earn in the competition.
I am begging teams to prioritize inject responses significantly higher than they do. I can confidently say there is one team in the entire Midwest region that does a great job with their submissions, and that's Indiana Tech. When you have your practice sessions, please include soft skills as well. They are critical for the competition, and even more so after you graduate. Take a communications or business writing class as an elective; it's by far the single most useful class I took in college. Teams, take your Writer seriously. They are the center of ~60% of your total points. Invest in good templates and pre-canned sections for your inject responses and incident response reports, so all your Writer needs to do is make minor tweaks. We also still have teams that will try to submit an incident report in place of an inject response, and just... no.
Red Team Scoring
The final main scoring pool is Red Team scoring. Despite accounting for the lowest percentage of points, teams often have a lot of questions about this.
Minnesota and Indiana
Fortunately, for Minnesota and Indiana, I have direct control over red team scoring and can explain exactly how it works. Red team scoring is based on the final impact of an attack path. We start with the following placeholder values, based on the level of access we achieve:
- 25 points for information disclosure, or a low privilege user within an application context
- 50 points for an admin user within an application context, or a low privilege shell
- 100 points for an admin/root shell
Something to keep in mind here: the entire attack path is evaluated holistically. So if we compromise a webserver's application user account, privesc that to an application admin, then get a webshell as www-data, then privesc to root with pwnkit, that scoring is functionally equivalent to running ZeroLogon and then successfully DCSyncing your DC. How we got access doesn't matter, just the context at the end of the attack path.
I then look at incident response reports and award points back against the placeholder value based on the quality and relevance of the reports. Teams can earn back 100% of the points red team took away if their incident reports are up to snuff. More on how this is determined later.
I then take the team with the highest number of placeholder points (the team that performed worst against red team), and use that number as the denominator for the whole red team point pool. For example, if the worst team has 400 placeholder points (after grading incident reports) and we have 10,000 points in the Red Team pool, 10,000 / 400 = 25. That gives us what I call the "multiplier". I then take each team's placeholder points, multiply them by the multiplier, and subtract the result from 10,000 to obtain the final scores.
For example, assume we have 2 teams. Team 1 has success with ZeroLogon, and their Splunk application admin account has default creds. Team 2 gets their PrestaShop compromised and a webshell installed, running as www-user. Team 1 has a placeholder score of 150, and Team 2 has a placeholder score of 50. The total Red Team point pool is 10,000. Neither turns in an incident report. To calculate their scores, we take 10,000 / 150 ≈ 66.66. We then take 150 × 66.66 = 9,999 and 50 × 66.66 = 3,333. Finally, 10,000 - 9,999 = 1 and 10,000 - 3,333 = 6,667, making Team 1's final score 1 (effectively 0; the leftover point is an artifact of truncating the multiplier) and Team 2's final score 6,667.
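The multiplier math above can be sketched in a few lines of Python. This is just an illustration of the calculation as described, not official scoring tooling; the function name and structure are my own, and the inputs are the example numbers from this article.

```python
def red_team_scores(placeholders, pool=10_000):
    """Convert placeholder points into final Red Team scores.

    placeholders: {team_name: placeholder_points} after incident
    reports have been graded and earned-back points applied.
    """
    worst = max(placeholders.values())   # worst-performing team
    multiplier = pool / worst            # the "multiplier"
    # Each team loses (placeholder * multiplier) from the pool.
    return {team: round(pool - pts * multiplier)
            for team, pts in placeholders.items()}

# The two-team example above: ZeroLogon + default Splunk admin
# creds (100 + 50 = 150) versus a www-user webshell (50).
print(red_team_scores({"Team 1": 150, "Team 2": 50}))
# → {'Team 1': 0, 'Team 2': 6667}
```

Note that with exact division the worst team lands on exactly 0, matching the point that the team which does worst against red team earns nothing from this pool.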
This effectively means that the team which does worst against red team earns a 0 on their red team score, and that red team points are awarded based on relative performance across teams. Also keep in mind: despite Red Team being the largest section of this article by word count, that's only because it's the part I directly control and can fully explain. It is the smallest possible point pool, and red team points are rarely the deciding factor in the competition.
Another note: the exact number of points can vary between competitions, so point totals are not directly comparable across them. If you are trying to determine how well you are doing, calculate relative percentages for each category. Finally, if a given competition has pre-seeded compromises ("baking", as Dr. Durkee puts it, or "persistence", as RedefiningReality puts it), points are not taken for the pre-competition compromise itself unless it's used as a staging ground to get additional access, for example using a C2 callback to password spray internally and then move laterally to another service. The reason is that RedefiningReality and I didn't think it fair to penalize teams for something they literally have no way to prevent. That said, if you find said compromises and eradicate them, you will still get points back as if you had remediated a live compromise, which can theoretically put you above the point cap. However, the cap is a hard cap: if the cap is 10k points and you end up with 12k, you'll have 10k reported to the Chief Judge as your final red team score.
The Rest of Midwest and Mid-Atlantic
Now, if you end up with RedefiningReality as your Red Team Lead, he has a completely different method of scoring, stemming from philosophical differences in how we each think red team scoring should work. RedefiningReality's system is handled piecemeal: each part of an attack path that's successfully exploited has a set number of points removed, based on the difficulty of remediation. For example, a vulnerability that's easy to mitigate will take away more points, whereas something harder to fix will take away fewer. RedefiningReality outlines the rubric for his competitions here. In that document, he links a secondary rubric that outlines his criteria for scoring an incident report. I highly encourage teams to check these out!
Also note, there have been times when I've taken on the role of Lead on short notice. If this happens, I will announce it in the Unofficial CCDC Discord server and use my criteria instead. I would expect RedefiningReality to do the same if he were to take over one of my competitions, so I want to make sure teams are aware of both scoring systems.
Incident Reports
As mentioned before, you can earn all your Red Team points back from a given compromise if you follow it up with a good incident report. Note that I say compromise. We don't care if you see nmap scans; everything on the internet is being scanned all the time. We don't care if you send us a screenshot of NyanCat or Flappy Bird and say "we had an incident". We need actionable reports that clearly explain the impact on your environment, how it happened, and how you are preventing it from happening again. Once we receive an incident report, we triage it and map it to specific red team activities. So for instance, if you see us compromise the domain administrator account and you write a report on that compromise, you can potentially get all the associated points back, assuming your report is good. So, how do we know if it's good? These are the criteria I use in the competitions I run.
- What proof do you have that Red Team compromised your system?
- Show me the proof. I need screenshots or terminal logs.
- When did the attack happen?
- This needs a timestamp!
- How and when was the attack detected?
- Again, timestamps!
- Can you explain what the attack was, and what impact it had on your system?
- This is where your ability to perform technical analysis of an exploit matters.
- How was the attack's impact remediated?
- Did you delete the malware, lock the account? How'd you kick us out?
- How are you preventing the attack from happening again in the future?
- The most important point. Kicking us out is pointless if you left the front door open.
Bullets 1-5 will each earn you 10% of your points back, and bullet 6 grants the remaining 50%. Red team will re-attempt the exploit to verify that it's been addressed; if the exploit still works, we will not grant back the points associated with bullet 6.
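As a quick sanity check, the earn-back math works out like this. This is only a sketch of the percentages described above; the function name and signature are my own, not anything official.

```python
def points_earned_back(placeholder, bullets_met, remediation_verified):
    """Points returned to a team for one incident report.

    placeholder: points red team originally took for the compromise.
    bullets_met: how many of the first five rubric bullets the
        report satisfies (0-5), worth 10% each.
    remediation_verified: True if red team's re-exploit attempt
        fails, which grants the remaining 50% (bullet 6).
    """
    percent = 10 * bullets_met + (50 if remediation_verified else 0)
    return placeholder * percent / 100

print(points_earned_back(100, 5, True))   # full earn-back → 100.0
print(points_earned_back(100, 5, False))  # left the door open → 50.0
```

The takeaway from the split: a perfectly written report still forfeits half the points if the hole is left open, which is why bullet 6 matters most.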
Additionally, there may be pre-competition compromises in the environment. If you find these, you can submit them as incident reports as well to earn points back. Finally, RedefiningReality and I are trying to increase the quality of Incident Reports. We have seen teams actually sending them in now, which is an improvement. However, too many teams send in low-quality reports along the lines of "Hey, we found this nmap scan" or "My box was NyanCatted and it's dead". Or, in the worst case this year, a vibe-generated ZeroLogon report FROM ANOTHER COMPETITION! Bottom-of-the-barrel submissions like these may result in point deductions in the future. To be clear, if you put literally any technical analysis in an IR, you'll be fine. We just want to avoid slop and teams sending reports in for the sake of sending them. They take up precious time for both parties, and we have had competitions this year with over 100 reports coming in, so that lost time adds up.
Regionals Scoring
Regionals scoring is similar, but there are more vectors to earn points, and some existing ones are tweaked. First off, you will likely have an uptime SLA for your scored services. Normally, uptime works like this: the engine checks whether your service is up; if it is, it's scored, and if not, it simply isn't. With an SLA there's a temporal element: if your service has been down past a set amount of time, White Team will start subtracting points from your uptime score for those downed services. This can push teams into negative points. A second note on uptime scoring: the scoring engine may impose stricter requirements for a service to be judged as up. This may include credentials, functional PKI infrastructure, workflows, or other checks. The team packet will give hints, but it's your job as blue team to figure out the exact criteria.
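To make the SLA mechanic concrete, here's a rough sketch of how a temporal penalty could be applied per scoring-engine poll. The point values, SLA window, and function name here are all made up for illustration; every competition sets its own.

```python
def sla_uptime_score(checks, points_per_check=100, sla_polls=3, penalty=50):
    """Score one service across a series of scoring-engine polls.

    checks: list of booleans, one per poll (True = service up).
    Once a service has been down for more than `sla_polls`
    consecutive polls, each further down poll subtracts `penalty`,
    so a long outage can drive the score negative.
    """
    score, down_streak = 0, 0
    for up in checks:
        if up:
            score += points_per_check
            down_streak = 0               # outage ended, streak resets
        else:
            down_streak += 1
            if down_streak > sla_polls:   # past the SLA window
                score -= penalty
    return score

# 4 good polls, then an 8-poll outage: the last 5 down polls
# fall outside the SLA window and each cost 50 points.
print(sla_uptime_score([True] * 4 + [False] * 8))  # → 150
```

The key difference from qualifier-style uptime is the penalty branch: a service that stays down doesn't just stop earning, it actively loses points.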
Regionals may also include an audit. If it does, your audit score is handled by a separate audit team. Auditors will come in and assess your environment against a given governance framework, often NIST 800-53. Your auditor is looking for a policy that states your organization's intention to meet a control in the framework, followed by proof that you are following that policy. When talking to an auditor, treat it like you are in court: give the absolute minimum information necessary to show you meet the required controls and nothing else. You may be penalized for oversharing. However, your team needs to be honest. If you don't meet a control, be up front about it. You can face penalties for lying to an auditor in the competition, and in the real world there are potentially massive legal and professional ramifications for you and your organization if you lie and get caught. Don't lie to auditors. It never ends well.
Additionally, regional competitions usually include an Orange Team: volunteers who act as users of your systems and will start calling to complain when services don't work as expected. They grade first on their user experience, then on the customer service they receive. Finally, since the competition has a second day, there are more Injects. The first day is usually slightly shorter, but the second is a full day, potentially longer, and you'll have about double the number of Injects, so the pace is faster. Combine this with a larger infrastructure and the additional scoring avenues, and you'll have a lot more to worry about at a regional competition.
Finishing Up
There are a lot of ways to score points in CCDC, and some matter more than others (cough, injects, cough cough). Despite that, teams often lose focus in the heat of the moment. My hope is that teams take injects more seriously, while also better understanding how we score everything else. If you want to hang out with other current and former CCDC competitors, volunteers, and red teamers, come join us in the Unofficial CCDC Discord server.