FlareOn 2025 writeup

Challenge 1

Challenge 1 was a Python script that XOR-decoded a byte string with a key derived from a sum of bear coordinates. I just brute-forced the sum with the script below:

def GenerateFlagText(sum):
    key = sum >> 8
    encoded = b"\xd0\xc7\xdf\xdb\xd4\xd0\xd4\xdc\xe3\xdb\xd1\xcd\x9f\xb5\xa7\xa7\xa0\xac\xa3\xb4\x88\xaf\xa6\xaa\xbe\xa8\xe3\xa0\xbe\xff\xb1\xbc\xb9"
    plaintext = []
    for i in range(len(encoded)):
        char_value = encoded[i] ^ (key + i)
        # Check if the character is ASCII (0-127)
        if char_value > 127:
            return None
        plaintext.append(chr(char_value))
    return ''.join(plaintext)

# brute force bear_sum
for test_sum in range(1, 200000):
    flag = GenerateFlagText(test_sum)
    if flag is not None and ("flare" in flag.lower() or "flag" in flag.lower()):
        print(test_sum, flag)

The flag was drilling_for_teddies@flare-on.com. Interesting to note: multiple coordinate sums produce the correct flag.
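Multiple sums work because GenerateFlagText only keys off the high bits: key = sum >> 8 discards the low byte, so all 256 sums sharing the same upper bits decode identically. A quick illustration:

```python
# key = sum >> 8 drops the low byte, so sums that differ only in the
# low 8 bits produce the same XOR key (and hence the same flag).
assert (0x1234 >> 8) == (0x12FF >> 8) == 0x12
assert (0x1300 >> 8) != (0x12FF >> 8)
```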

Challenge 2

This one was fun! We have a marshalled + zlib + baseN-encoded bytestring that runs some authentication logic.
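Such payloads unwrap in the reverse order they were packed. A minimal sketch with a toy payload (not the challenge bytes), assuming the same base85 -> zlib -> marshal layering that shows up later in the solve:

```python
import base64, zlib, marshal, dis

# Build a toy payload the same way the challenge packs its code object:
# marshal the code, compress it, then base85-encode it.
inner = compile("print('hello from the inner code object')", "<toy>", "exec")
payload = base64.b85encode(zlib.compress(marshal.dumps(inner)))

# Unwrap: base85-decode, decompress, unmarshal, then inspect and run.
code_obj = marshal.loads(zlib.decompress(base64.b85decode(payload)))
dis.dis(code_obj)
exec(code_obj)
```

Note that marshal output is tied to the Python version, so real challenge blobs have to be loaded with (roughly) the interpreter version that produced them.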

I used dis.dis to disassemble the code object in an interpreter as well as the nested one to figure out what was going on. Here is a paste of the bytecode for the second nested marshalled object:

>>> dis.dis(marshal.loads(final))
  0           0 RESUME                   0

  2           2 LOAD_CONST               0 (0)
              4 LOAD_CONST               1 (None)
              6 IMPORT_NAME              0 (os)
              8 STORE_NAME               0 (os)

  3          10 LOAD_CONST               0 (0)
             12 LOAD_CONST               1 (None)
             14 IMPORT_NAME              1 (sys)
             16 STORE_NAME               1 (sys)

  4          18 LOAD_CONST               0 (0)
             20 LOAD_CONST               1 (None)
             22 IMPORT_NAME              2 (emoji)
             24 STORE_NAME               2 (emoji)

  5          26 LOAD_CONST               0 (0)
             28 LOAD_CONST               1 (None)
             30 IMPORT_NAME              3 (random)
             32 STORE_NAME               3 (random)

  6          34 LOAD_CONST               0 (0)
             36 LOAD_CONST               1 (None)
             38 IMPORT_NAME              4 (asyncio)
             40 STORE_NAME               4 (asyncio)

  7          42 LOAD_CONST               0 (0)
             44 LOAD_CONST               1 (None)
             46 IMPORT_NAME              5 (cowsay)
             48 STORE_NAME               5 (cowsay)

  8          50 LOAD_CONST               0 (0)
             52 LOAD_CONST               1 (None)
             54 IMPORT_NAME              6 (pyjokes)
             56 STORE_NAME               6 (pyjokes)

  9          58 LOAD_CONST               0 (0)
             60 LOAD_CONST               1 (None)
             62 IMPORT_NAME              7 (art)
             64 STORE_NAME               7 (art)

 10          66 LOAD_CONST               0 (0)
             68 LOAD_CONST               2 (('ARC4',))
             70 IMPORT_NAME              8 (arc4)
             72 IMPORT_FROM              9 (ARC4)
             74 STORE_NAME               9 (ARC4)
             76 POP_TOP

 15          78 LOAD_CONST               3 (<code object activate_catalyst at 0x0000027F7C53CC60, file "<catalyst_core>", line 15>)
             80 MAKE_FUNCTION            0
             82 STORE_NAME              10 (activate_catalyst)

 54          84 PUSH_NULL
             86 LOAD_NAME                4 (asyncio)
             88 LOAD_ATTR               22 (run)
            108 PUSH_NULL
            110 LOAD_NAME               10 (activate_catalyst)
            112 CALL                     0
            120 CALL                     1
            128 POP_TOP
            130 RETURN_CONST             1 (None)

Disassembly of <code object activate_catalyst at 0x0000027F7C53CC60, file "<catalyst_core>", line 15>:
 15           0 RETURN_GENERATOR
              2 POP_TOP
              4 RESUME                   0

 16           6 LOAD_CONST               1 (b'm\x1b@I\x1dAoe@\x07ZF[BL\rN\n\x0cS')
              8 STORE_FAST               0 (LEAD_RESEARCHER_SIGNATURE)

 17          10 LOAD_CONST               2 (b'r2b-\r\x9e\xf2\x1fp\x185\x82\xcf\xfc\x90\x14\xf1O\xad#]\xf3\xe2\xc0L\xd0\xc1e\x0c\xea\xec\xae\x11b\xa7\x8c\xaa!\xa1\x9d\xc2\x90')
             12 STORE_FAST               1 (ENCRYPTED_CHIMERA_FORMULA)

 19          14 LOAD_GLOBAL              1 (NULL + print)
             24 LOAD_CONST               3 ('--- Catalyst Serum Injected ---')
             26 CALL                     1
             34 POP_TOP

 20          36 LOAD_GLOBAL              1 (NULL + print)
             46 LOAD_CONST               4 ("Verifying Lead Researcher's credentials via biometric scan...")
             48 CALL                     1
             56 POP_TOP

 22          58 LOAD_GLOBAL              3 (NULL + os)
             68 LOAD_ATTR                4 (getlogin)
             88 CALL                     0
             96 LOAD_ATTR                7 (NULL|self + encode)
            116 CALL                     0
            124 STORE_FAST               2 (current_user)

 25         126 LOAD_GLOBAL              9 (NULL + bytes)
            136 LOAD_CONST               5 (<code object <genexpr> at 0x0000027F7C410B30, file "<catalyst_core>", line 25>)
            138 MAKE_FUNCTION            0
            140 LOAD_GLOBAL             11 (NULL + enumerate)
            150 LOAD_FAST                2 (current_user)
            152 CALL                     1
            160 GET_ITER
            162 CALL                     0
            170 CALL                     1
            178 STORE_FAST               3 (user_signature)

 27         180 LOAD_GLOBAL             13 (NULL + asyncio)
            190 LOAD_ATTR               14 (sleep)
            210 LOAD_CONST               6 (0.01)
            212 CALL                     1
            220 GET_AWAITABLE            0
            222 LOAD_CONST               0 (None)
        >>  224 SEND                     3 (to 234)
            228 YIELD_VALUE              2
            230 RESUME                   3
            232 JUMP_BACKWARD_NO_INTERRUPT     5 (to 224)
        >>  234 END_SEND
            236 POP_TOP

 29         238 LOAD_CONST               7 ('pending')
            240 STORE_FAST               4 (status)

 30         242 LOAD_FAST                4 (status)

 31         244 LOAD_CONST               7 ('pending')
            246 COMPARE_OP              40 (==)
            250 EXTENDED_ARG             1
            252 POP_JUMP_IF_FALSE      294 (to 842)

 32         254 LOAD_FAST                3 (user_signature)
            256 LOAD_FAST                0 (LEAD_RESEARCHER_SIGNATURE)
            258 COMPARE_OP              40 (==)
            262 POP_JUMP_IF_FALSE      112 (to 488)

 33         264 LOAD_GLOBAL             17 (NULL + art)
            274 LOAD_ATTR               18 (tprint)
            294 LOAD_CONST               8 ('AUTHENTICATION   SUCCESS')
            296 LOAD_CONST               9 ('small')
            298 KW_NAMES                10 (('font',))
            300 CALL                     2
            308 POP_TOP

 34         310 LOAD_GLOBAL              1 (NULL + print)
            320 LOAD_CONST              11 ('Biometric scan MATCH. Identity confirmed as Lead Researcher.')
            322 CALL                     1
            330 POP_TOP

 35         332 LOAD_GLOBAL              1 (NULL + print)
            342 LOAD_CONST              12 ('Finalizing Project Chimera...')
            344 CALL                     1
            352 POP_TOP

 37         354 LOAD_GLOBAL             21 (NULL + ARC4)
            364 LOAD_FAST                2 (current_user)
            366 CALL                     1
            374 STORE_FAST               5 (arc4_decipher)

 38         376 LOAD_FAST                5 (arc4_decipher)
            378 LOAD_ATTR               23 (NULL|self + decrypt)
            398 LOAD_FAST                1 (ENCRYPTED_CHIMERA_FORMULA)
            400 CALL                     1
            408 LOAD_ATTR               25 (NULL|self + decode)
            428 CALL                     0
            436 STORE_FAST               6 (decrypted_formula)

 41         438 LOAD_GLOBAL             27 (NULL + cowsay)
            448 LOAD_ATTR               28 (cow)
            468 LOAD_CONST              13 ('I am alive! The secret formula is:\n')
            470 LOAD_FAST                6 (decrypted_formula)
            472 BINARY_OP                0 (+)
            476 CALL                     1
            484 POP_TOP
            486 RETURN_CONST             0 (None)

 43     >>  488 LOAD_GLOBAL             17 (NULL + art)
            498 LOAD_ATTR               18 (tprint)
            518 LOAD_CONST              14 ('AUTHENTICATION   FAILED')
            520 LOAD_CONST               9 ('small')
            522 KW_NAMES                10 (('font',))
            524 CALL                     2
            532 POP_TOP

 44         534 LOAD_GLOBAL              1 (NULL + print)
            544 LOAD_CONST              15 ('Impostor detected, my genius cannot be replicated!')
            546 CALL                     1
            554 POP_TOP

 45         556 LOAD_GLOBAL              1 (NULL + print)
            566 LOAD_CONST              16 ('The resulting specimen has developed an unexpected, and frankly useless, sense of humor.')
            568 CALL                     1
            576 POP_TOP

 47         578 LOAD_GLOBAL             31 (NULL + pyjokes)
            588 LOAD_ATTR               32 (get_joke)
            608 LOAD_CONST              17 ('en')
            610 LOAD_CONST              18 ('all')
            612 KW_NAMES                19 (('language', 'category'))
            614 CALL                     2
            622 STORE_FAST               7 (joke)

 48         624 LOAD_GLOBAL             26 (cowsay)
            634 LOAD_ATTR               34 (char_names)
            654 LOAD_CONST              20 (1)
            656 LOAD_CONST               0 (None)
            658 BINARY_SLICE
            660 STORE_FAST               8 (animals)

 49         662 LOAD_GLOBAL              1 (NULL + print)
            672 LOAD_GLOBAL             27 (NULL + cowsay)
            682 LOAD_ATTR               36 (get_output_string)
            702 LOAD_GLOBAL             39 (NULL + random)
            712 LOAD_ATTR               40 (choice)
            732 LOAD_FAST                8 (animals)
            734 CALL                     1
            742 LOAD_GLOBAL             31 (NULL + pyjokes)
            752 LOAD_ATTR               32 (get_joke)
            772 CALL                     0
            780 CALL                     2
            788 CALL                     1
            796 POP_TOP

 50         798 LOAD_GLOBAL             43 (NULL + sys)
            808 LOAD_ATTR               44 (exit)
            828 LOAD_CONST              20 (1)
            830 CALL                     1
            838 POP_TOP
            840 RETURN_CONST             0 (None)

 51     >>  842 NOP

 52         844 LOAD_GLOBAL              1 (NULL + print)
            854 LOAD_CONST              21 ('System error: Unknown experimental state.')
            856 CALL                     1
            864 POP_TOP
            866 RETURN_CONST             0 (None)

 27     >>  868 CLEANUP_THROW
            870 EXTENDED_ARG             1
            872 JUMP_BACKWARD          320 (to 234)
        >>  874 CALL_INTRINSIC_1         3 (INTRINSIC_STOPITERATION_ERROR)
            876 RERAISE                  1
ExceptionTable:
  4 to 226 -> 874 [0] lasti
  228 to 228 -> 868 [2]
  230 to 868 -> 874 [0] lasti

Disassembly of <code object <genexpr> at 0x0000027F7C410B30, file "<catalyst_core>", line 25>:
 25           0 RETURN_GENERATOR
              2 POP_TOP
              4 RESUME                   0
              6 LOAD_FAST                0 (.0)
        >>    8 FOR_ITER                15 (to 42)
             12 UNPACK_SEQUENCE          2
             16 STORE_FAST               1 (i)
             18 STORE_FAST               2 (c)
             20 LOAD_FAST                2 (c)
             22 LOAD_FAST                1 (i)
             24 LOAD_CONST               0 (42)
             26 BINARY_OP                0 (+)
             30 BINARY_OP               12 (^)
             34 YIELD_VALUE              1
             36 RESUME                   1
             38 POP_TOP
             40 JUMP_BACKWARD           17 (to 8)
        >>   42 END_FOR
             44 RETURN_CONST             1 (None)
        >>   46 CALL_INTRINSIC_1         3 (INTRINSIC_STOPITERATION_ERROR)
             48 RERAISE                  1
ExceptionTable:
  4 to 44 -> 46 [0] lasti

From the <genexpr> disassembly we can see the username is encoded byte-by-byte as c ^ (i + 42) and compared against the researcher signature, so applying the same XOR to the signature recovers the expected username. I wrote a script to decode it to printable ASCII:

# Signature taken from the disassembly above
LEAD_RESEARCHER_SIGNATURE = b'm\x1b@I\x1dAoe@\x07ZF[BL\rN\n\x0cS'

printable_chars = []
for i, sig_byte in enumerate(LEAD_RESEARCHER_SIGNATURE):
    char_val = sig_byte ^ (i + 42)
    if 32 <= char_val <= 126:
        printable_chars.append(chr(char_val))
    else:
        printable_chars.append('?')

printable_name = ''.join(printable_chars)
print(f"Printable username: '{printable_name}'")

And we get:

G0ld3n_Tr4nsmut4t10n as the expected username
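The recovered name can be sanity-checked by re-applying the genexpr's c ^ (i + 42) transform and comparing against the signature from the disassembly:

```python
LEAD_RESEARCHER_SIGNATURE = b'm\x1b@I\x1dAoe@\x07ZF[BL\rN\n\x0cS'
username = "G0ld3n_Tr4nsmut4t10n"

# Re-encode with the same formula; it must reproduce the signature exactly.
reencoded = bytes(ord(c) ^ (i + 42) for i, c in enumerate(username))
assert reencoded == LEAD_RESEARCHER_SIGNATURE
```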

Nice! The next step is to mock os.getlogin() to return that username (I actually just ran the brute-force decode again inside the mock). I used an LLM to put the pieces together into a final script:


# ================================================================= #
# ==           PROJECT CHIMERA - Dr. Alistair Khem's Journal     == #
# ==                  -- EYES ONLY --                            == #
# ================================================================= #
#
# Journal Entry 734:
#
# Success is within my grasp! After years of research, I have finally
# synthesized the two key components. The first, my 'Genetic Sequencer,'
# is stable and ready. It's designed to read and execute the final,
# most crucial part of my experiment: the 'Catalyst Serum.'
#
# The Catalyst is the key to creating a true digital lifeform.
# However, it is keyed to my specific biometric signature to prevent
# my research from falling into the wrong hands. Only I, the Lead
# Researcher, can successfully run this final protocol.
#
# If anyone else finds this, I urge you: DO NOT RUN THIS SCRIPT.
# The results could be... unpredictable.
#
# - Dr. A. Khem
#
import zlib
import marshal
import dis
import sys
import base64  # needed later for base64.b85decode
import py_compile
import importlib.util
import os

# These are my encrypted instructions for the Sequencer.
encrypted_sequencer_data = b'x\x9cm\x96K\xcf\xe2\xe6\x15\xc7\xfd\xcedf\x92\xe6\xd2J\x93\xceTI\x9b\x8c\x05&\x18\xe4\t\x06\x03/\xc2\xdc1w\xcc\x1dl/\x00_\x01\xe3\x1b6\xc6\xe6\xfa\x15\x9a\xae\xd2\xae\xba\xae\xd2/Q\xf5\x0b\xbc\xd1\xa4JJVUV\xdd\xa5\xca\xae\xab\xf2\xceM\x89\x9ag\xe1\xf3\x9cs~\xe7\xfc\x8f\x1f\xc9\xd6\xf3\x1d\xf0\xa3u\xef\xa5\xfd\xe1\xce\x15\x00|\x0e\x08\x80p\xa5\x00\xcc\x0b{\xc5\\=\xb7w\x98;\xcf\xed]\xe6\xaep\x87y\xe3\x0e \xde\x13\xee~q\xf5\xa2\xf0\nx\xee\xbf\xf1\x13\x1f\x90\xdf\x01\xfeo\x89\xaf\x19\xe6\xc1\x85\xb9\x92\x7f\xf53\xcc\x83\xd7\xcc[\x17\xe6\x8e\xfc\xfe\xcf0o\xbdf\xde~\xae}\xef\'\xdaw\xe5\xdf\xfcL\xcd-\xf9\xee\x17/\xbd/\xee\xbc\xac\x7f\xef\x12}\xefU\xf4\n\xd8^\xc1\xf7\xff}\xbb%\xad\xbf\xbe\t\x00\xbc\xf7 \x06[\xe9\xb8\x0f\x89MU\xb0\xbbc\x97\'E!\x0ea<\t\xfa\xc7\x81aG\xf3\xac\x88\xca\xe1\xe0\x12a\xce\x1b\x18\xa5v\xce59:\x85\xd5Y\xb5)G\xac\x92\xbc\xdbB8Y\xeb\\cc\xeff%\xf6\xcb>\xb5\x10\xdc\xce\x15"\x16\x8f\xcb\xc85\\\xc2\xb4b\xfa\x94\xc1\xcb\xabF\x0c\xd3\x95M\xde\xf2r\x0c\xb6_\x11\xc9\xfd!ed\x9bX\x8e\x13\xb9q ]\xd8U\r\xb361\x0bT\x83B\xb3K8\x8ay+\x95AC\xab\x8a\xd16\xa2\xc0\xb9\xb9\x0c\x06b\xce\xbexR \xaa\xe9\x14\xdb\xb6G.\xd2sj\\$\xf7\xabh\xe7\x10EF+\x08\xcd*y\xf7x<lH\xd48\r\xaa\xd7s84\xf0i=4R\x9c\x1d\xdd\xeb\xfa\x98@\xfc+\xaf\x11:b\xa0\xb2E u\x1f\xaa\x08\xe9q0\x12\xc0[\xfb\x80\x15\xaa#\xca\xf2p\xcc7*\xa3z\xcd\x11;&\xb9\x8b\xee\xa1\x12\x92\xcc\x12\x93\xbd\x10\xac\xaa}%\x8e\xe8q\xdf\xb1\xb5\x87l\x8e\x85\x1d\xb4\xdb\x08\x0cr]*\x10O\xac\x83!|\x9c\xcf\xecT\xa5U\xa4\x12\x870\xb73&\xbb\xb5#o\'}\xa1\xce\xc1($\xb61\x01\xa1\xd6\x8b\x10=\x93\x97\x13\xc8\x01\xc7\x10\xea\xdaMr\x831\xd7>\x7f` 
\xc6\'\xe3\x12\xb7E\xb5H2X\xc6\x87\xc5\x9c\xb4Z\x8c\xe7h:\x94M\x11\xcbE\x14l\x9eL\xd5\x82X\xc9\x9d\x06m\x97\r\x05\x92\xa5\x9d-\x18+R\xd1\xa2M<\x0b\xb6V\x9a\xc0\xc0]|3\xc7l\xdf\xccPU\x8dm\x8a\x0e\xd7\x0fuk\xdc6\xe3\x97\xd885\xf2\x98i\xa6\x83\r\x08\x9f}8)\x8cE\xd0\'D1\xa4QS\nM\x82\xc6\x10\xa9L\xdbTU3\x1cu\xab\x9fTf\xba\x96\x06\xf5\x8c\xdf[\xaf\xb0\x90\xba!\x15}\xc3$i\xb8\x18\x14c\xb6\x13T\xe9X\x83\xcc\x87\xe9\x84\x8f]r#\x83\xc9*\xf3To\x81\x83\xb5\xec\xfaP(_\xc7\x88),\x1b\xa0\x82\xb9\x04\xed\x9f\xc7\xb3^E\xc9a\xc7|B0\x1a\x01\x19\x16\x1b\xfb\xcd\x90\xe7\xb6M7:\xd9sh\x04&\xb3\x0e{\x12\x8d\xde5#\xe9\xbe\xe1\x84\xf6H\xcd\xc0,\x91\xcc\xc6 9\x05-\xa0Q>\x94\xea\xf4"\xa2#gC\xa7<\xb8Xp6\xde\\\x99f\xadZ\xd9\xab\xbe\x92\x9e+\xe7#\x9e\x10)%]\xf0$l:\x87\x84\'\xc2\x1f\xe1j#\xb6$6\xf3\xfc\xb6\xb6\xc9\xed\xf3\th\xb0\xa2B\xfdY\x00\t\xe6\x96\'r\xe4\xbb\x1cK>\xc3\xc6\x1c\x91\xb88\xe6\xae\xbb\x083y0\x86\xc5+#%76\xcb\xd8l#G\xe8\xb5\xa8GB\xbe\xc01\x19M$\xe3Z\xad\x14\x17\xe7\xf1\x8dLP\x8e\xe3\xb6G\xa3]1\x10\xc1\xab\x1b\xa6\xe7Q\xaa\r\xbf\x12\xc8\xd8\xde$Q^Hu\xa9Q4\x86\\\xc0\xa4\x1a[\x07\xcc\xb5OL\x7f\x8c\xf4R\x18\xb5\x8f\xa0\xeb\x95\x88\xb7\xd0\xa5S\xf6\xce\xf2\x8cf_\x8b\x1b6r\x8a%\xb1\x82k\xf2\x15t\xdf\x99\xed\x9b\xc9r?\x9a\xcd\x0b\xab5d\xed\xdde?Y\xdc\xb2\xf9%\xbcI\xf3}\xd3\x93\xa2\x9aY\xbe\x83\x0c\x19\xa6\x86\xb2\xbb\xf9\x1e-J\'\xc9\x91\xfc\xaa@/\'<Q\x98N=;S\xdc\x0cl\tE\xaa\xf1b\xa5\xber\x13|\xbc)f\x02\x0b\xd26\x13\x17-\x1d\xce\xa19\xb5\xc2\xd5\xc1\x98g\x89\x0b\xc1\x8eJ\xc9\xfa@1s|\xaa\x8b\\\x13\x12\xb1\xd1\xbc\xfd6\x94a\xb804E\x92N)\xcc\xc4\xf9Sg\x0ev\x06\x06\x94-\xc5\x05\x7f\'Y]g5%\x82.\x1c~L\x16\xfa}S\x0e\xb4F0GT\xd2yZ\xe9xiu1\xef\r\xc3\x9d\xa2k\x16\xac:\xd9\xd7\t\xd5"\x17\xd2)\x89T\x1b\xe5\xa0\xe2\xcd\x9e\xacf\x91\xd7\x88\n]\xe5d.\xd3@,G\x87\xd2$I\xc7B\x9dZt\x1anP~\x9f\xb7P\x92\x02#?\xaf\xc4\xd7\xd7\xa1D$\x91\xedT\x82\xe9$\xb8\xaccr\xb3\xbfhur\xc7]3+\xf4\x82\x8e\xba\xc42\xdd\xb5\xb5\xaaZ~rm3\xa6\x9fpd|\xe7R\xecP_[`\x0c?\x0e\xda\xd1\xb4F\x1a\xe8LZ\x8a\x16\xd6\x0f\xec\x84=\x1c\x9b#\xe5\
x12\x96&{\x9d\xd6\xb1\x1bH\xa0{~\xba\x04SE\xa4x\xe4X\xd2\x8bJ\xf6\x904\x07\xc5MyA\x0f\xa9\x11\x9d\xafb\xd1\xd8^-\x94\xa7\xf6\xd2f$\x83\x84s\xb8\xbb\xe5R\xd6\x91\xdb\x12\xfe\xe2\x86\x91T\xa3\xbb\xdc\xe8X\xa19\x0b\x96\x02\x91\x02$\xc5<\x19u?\xcb\xf61\x1b)\xe3\'5\x7fr\xca\xd4,I\x0e\x9b\xa5\xa2\xec\x93\xa28\xbc*\xa3\x9e\xb8\xab\xd0B\x89\xe8L\xe4J\xd7\x0e\x88\xbe\xd2@\xed\xa05\xbcl\x1c1\xaf\xbb\xcanY\xa5\xe0w\xe1\x1eR\xaa\x12\xb3\x8e\x18\xac\xba\xb9n\xa3\xd6\xee\xaa\xd9"\xe5\xfa\xd6A|\x1em\x84Z\xdd\x1aN\xe0\xbcs\x8c)Z,#\xba\x8d\xca\xf6\x98\x98\x08\x04f\xec\xd0\xb8\xde\xf0\x9f\x88\xe9\x9e\x9d\x12\x88\xa6\xc73\xd3(l\x14\t\x83\xa4\xfdHl\xc8\xd62\x851^K\xf8\xcb$\x98Kj\xd3v\xbf]d\xf2DrD\xa6\xa3\xcb\x14\xabZS{\xbb\xc5]\x95\xa1\x85lkv\x08a{t\xe0\x0f\xa0\xedr\xa3\x9b\x9eGFT\x86eF\x1d\xe9\x14Kdd\xa4d\xa9\x8dqyS\xd5\xcc\xd9B\xd0\x9b\xe1\xa3\x89\xda\xbe#\x95\x0f\xae\x8ezy\x86\x90]\x8f6\xa6\x02\x98\xbd\xcao3\xe8\x8a\xf6b\xb8\xbck\xe6\xe7T\x0eN\xee\xda\x92\x1b\t\xb8\x03p8\xf2z\xa4\x12\xebk\x16ZR\xb72\xd4BPly\xcd\xb2]\'!\xd0\x198\x0e\xdamP+W\x08\xce\xb3\x0c\xd6\\\xfa\x10\x9e\xa7\x97\xd4\x9e\xdcC\xe0\xb4*m\xda\xd4\xa1\x97\x15A-\x17\xa9nO\x1e\xbe>4a\x88/\xb9{\x95\xee\x95\xe5\xc4\x1c\xadL:1QX\xce\xed\xf2\x12\x8e0\x89\xd9\xc8\x98\x9e\xd4\xda\xae\x1c\xc7\xd4\xb8\x1f\xac\x8du?\x18\x16\xc4\xa9\xda\xcaD\xaa\xc5\x1d?Lz\xbb\x9diV\xd2\x17tE\x91\xa1\xfd\xe5\x87\x9c\xf6,\xfa\x87zz\x83L\xe9\n\xdc\xee\xbb\x1e\xa9k\xfb\x0f\xd9\x9cU\xef{\xdac\x98\xd7X\xf0\x90\xb0\x06\xdb\x01\xd2\\\xe7\xdc\xf6\xb1\x99v\x0e\x05\x1e\xb5\xb0I\xbd\x9a\x98+Fx{\x18\xe4\x88\x9a\xb7\x10\xf6b\xady\xec\x94\xb5e\x04\xa4\x91\xe8\x9a\xd8V\xbd4T\'\n$f\xc7\x14<\x90\x91x\xa7;\x91\x8a\xe3CP\x90\x8b\xd5Z\xd4\x06\xd39\x1fJ&\x16ku\x8fGt\xc4\xd6\x92\x08|\x9d\x18{\x8cj[\xd8\x0f\x9d\xed\xae2AG\xad\xed\x8a\xf1V\xe0\xa5\x97\xa2\x8a\x88\xcb\x0fXi&s)\xd2\xb3\x00\x83-MC\xfa2\xc2\x13:\x17\xf4\x83\xfe|k\xc4\xa6K\xebB2\x8c\x16+{h\\\xad\xe8)\x1eJ\x9aI\xd9Z\x93ht\xd5\x9b\x0c\xc6\xa5T\x8e\xf3\xf2\xd1\xd6<:\xcaH4\x08\x8d7\x02%\x11\xe9(-\x81f\xa54\xc6\xd
9\xd24\x1f\xe0\xc4@#\xe5/\x94\xfc\x10B\xe0\x19\x18\xe2B\xde|\r>HaF.C\xd5\x9e\x13d\xae)\xbe0\x95\x830g,\xf1x\x82\xa6F\xc4R`\x87q\xd5)O\x96\x8b\xd6\xe5S\xa3\xb7\xaa\xaf\xe0[\xb8~\xc2\xc8\xc5IO\xe6x`\xbbn\xce\xea\xaaI0,B"\xccb\xb9\r\xa3U\x06\xed\x8dS`3\x9c\xaf\xb5\xa8\xe8\xfa\x0eB\x10\xe4I\x81U\x16\x9c\xc9\xae\x17\xda\xecIY\xd4\xc4\xf5\x82\x7f\xd2\x13W\xb6\xa8\xf1\xa2\xf9\xe4B\xec>.\x8a\xbc.\xdc\xe6yv\xcd*[k\xfd\xa4H\xe6\x9eXk\x93\xd5\x84\xa7O\x9f\xee>\xeam\xb5\xf5\\\xb4\x16\xbb[\xa8\xf0\n\xea\x89\xa6\xad^\xf2\xf0/\xcf\xf79\xd6\x12c\xd8\xf9\x8d\xddE\xec\xfc@eMk\xce*\xe7{\xeb\xad!Z\xe7\xc7\x17-]\x10\x85\xc9\xab\xfe\x93\x17\xbd\xcf\xf7\x0cs\xa1\xad\xcfoq\xd7Q\xe1v\x06\xf1\xfc\x90\xd7U\xc3\x14-\xebG\xf4\xf9\x17\xb7\xc9\x17\xe1\xf3\xe3\x97\xbd\x95\x0b0{\xf1:\x93\xe7\x95\xf7\x14\x9d\x15\xac\xf3\xfb\xaf5n\xa3\x13\x9d\x93E~}~\xa7dk\xfcz\xa1k\xfd\xcb@\xe7\x073E\xe7X\xc5:\x7f\xf8\x1a^h\xb7\xdc\x05\x98H/\xc9\xbf\x00?\xdc^\xfb\xfe\xfb\x10\x7f%c\xbd:\xb5\xf4\xf9M\\\xd5\x05[\x11\xd3\xe6\xaf\x9f\xdf\x12\x01\xc0\xfa\xfd\xe5\xf1\xfd\xdd\xab\xab\xab\xef\x80w\xbf\x05\xde\xfe\x16x\xef[\xe0\x9d\xef\xef\x03\x1f\xd6<7\xc0\xe3\x7f\x01\xf7n\xee#_\x01O\xffy\xbb\xf9\xe4+\xc0\xff\xcd#\xdfg\xd2\xd7\x8f|_>\xf2\xdd|\x92~\xf6(s\x03<\xfc\xe6\x03\xf8\x8f\xde?\x7f\xfa\xa7Oo\x02\xa9g\x1f\xa4/u\xdf<\xf6~\xe6|~\xfc\xc3\xf1\x06\xc2\x9f=N\xdd\x00\xef?\xef\xe4\xfb\n\xf8\xe4\xd2\xfbc\xf4\x8f\xe2\xd7\x1f\x85\xbe\xfc(t\x83\x12\x7fs\xfe\xbe}\xf6Q\xe7\x06\xf8\xf0?\xf7\x81\xab\xdf\xfe\x03\xf8\x9d\xf9\xf02\xd3\xff\x00hw\x9dH'
import arc4
import art
import emoji
import cowsay
import pyjokes

# Mock os.getlogin to return the correct username for authentication
original_getlogin = os.getlogin


def mock_getlogin():
    # The signature we need to match (from the disassembly)
    target_signature = b'm\x1b@I\x1dAoe@\x07ZF[BL\rN\n\x0cS'

    # Decode the required username by reversing the XOR with the correct formula
    # The actual formula is: c ^ (i + 42)
    required_username = ''.join(chr(sig_byte ^ (i + 42)) for i, sig_byte in enumerate(target_signature))
    print(f"[DEBUG] Spoofing username to: '{required_username}'")
    return required_username


# Replace os.getlogin with our mock
os.getlogin = mock_getlogin

# Activate the Genetic Sequencer. From here, the process is automated.
sequencer_code = zlib.decompress(encrypted_sequencer_data)
code_object = marshal.loads(sequencer_code)

print(f"Current Python version: {sys.version}")
print(f"Code object type: {type(code_object)}")

# Create a proper .pyc file with correct magic number
magic = importlib.util.MAGIC_NUMBER
timestamp = b'\x00\x00\x00\x00'  # Dummy timestamp
size = b'\x00\x00\x00\x00'  # Dummy source size

with open('sequencer_output.pyc', 'wb') as f:
    f.write(magic)  # Python version magic number
    f.write(timestamp)  # Modification time
    f.write(size)  # Source file size (Python 3.3+)
    marshal.dump(code_object, f)

print(f"Created sequencer_output.pyc with magic number: {magic}")

# Show disassembly
print("\nCode object disassembly:")
try:
    dis.dis(code_object)
except Exception as e:
    print(f"Could not disassemble: {e}")

# Now try to execute the code object to get the flag
print("\nAttempting to execute code object:")
try:
    # Create a custom namespace to capture any outputs
    namespace = {}
    exec(code_object, namespace)

    # Look for any interesting variables or functions that were created
    print("\nNamespace contents:")
    for key, value in namespace.items():
        if not key.startswith('__'):
            print(f"{key}: {type(value)} = {repr(value)[:100]}...")

except Exception as e:
    print(f"Execution failed: {e}")
    import traceback

    traceback.print_exc()

# Try to manually decode the base64 data we can see in the disassembly
try:
    print("\nTrying to manually decode the embedded data...")
    # From the disassembly, we can see there's base64 encoded data
    # Let's try to extract and decode it directly from the code object
    encoded_data = code_object.co_consts[2]  # This should be the long base64 string
    print(f"Found encoded data: {encoded_data[:50]}...")

    compressed_data = base64.b85decode(encoded_data)
    final_data = zlib.decompress(compressed_data)
    inner_code = marshal.loads(final_data)

    print("Inner code object disassembly:")
    dis.dis(inner_code)

    # Try to execute the inner code
    print("\nExecuting inner code object:")
    exec(inner_code)

except Exception as e:
    print(f"Manual decode failed: {e}")
    import traceback

    traceback.print_exc()

And I got the flag!

Verifying Lead Researcher's credentials via biometric scan...
[DEBUG] Spoofing username to: 'G0ld3n_Tr4nsmut4t10n'
   _    _   _  _____  _  _  ___  _  _  _____  ___   ___    _    _____  ___   ___   _  _     ___  _   _   ___   ___  ___  ___  ___
  /_\  | | | ||_   _|| || || __|| \| ||_   _||_ _| / __|  /_\  |_   _||_ _| / _ \ | \| |   / __|| | | | / __| / __|| __|/ __|/ __|
 / _ \ | |_| |  | |  | __ || _| | .` |  | |   | | | (__  / _ \   | |   | | | (_) || .` |   \__ \| |_| || (__ | (__ | _| \__ \\__ \
/_/ \_\ \___/   |_|  |_||_||___||_|\_|  |_|  |___| \___|/_/ \_\  |_|  |___| \___/ |_|\_|   |___/ \___/  \___| \___||___||___/|___/


Biometric scan MATCH. Identity confirmed as Lead Researcher.
Finalizing Project Chimera...
  __________________________________________
 /                                          \
| I am alive! The secret formula is:         |
| Th3_Alch3m1sts_S3cr3t_F0rmul4@flare-on.com |
 \                                          /
  ==========================================
                                          \
                                           \
                                             ^__^
                                             (oo)\_______
                                             (__)\       )\/\
                                                 ||----w |
                                                 ||     ||

Challenge 3

So this one was a real pain. I must say I really did not enjoy it...

We get a broken PDF; after a lot of back and forth, I managed to fix it (mostly by comparing it to a valid PDF, adding missing endobj keywords, etc.). With a more or less fixed PDF, I ran pdf-parser from Didier Stevens:

C:\Users\Someone\Downloads\3_-_pretty_devilish_file>python3 DidierStevensSuite\pdf-parser.py "pretty_devilish_file (3).pdf" -o 4 -f
This program has not been tested with this version of Python (3.12.4)
Should you encounter problems, please use Python version 3.12.2
obj 4 0
 Type:
 Referencing: 5 0 R
 Contains stream

  <<
    /Filter /FlateDecode
    /Length 5 0 R
  >>

 b"q\n612 0 0 10 0 -10 cm\nBI\n/W 37\n/H 1\n/CS /G\n/BPC 8\n/F [/AHx /DCT]\nID\nffd8ffe000104a46494600010100000100010000ffdb00430001010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101ffc0000b080001002501011100ffc40017000100030000000000000000000000000006040708ffc400241000000209050100000000000000000000000702050608353776b6b7030436747577ffda0008010100003f00c54d3401dcbbfb9c38db8a7dd265a2159e9d945a086407383aabd52e5034c274e57179ef3bcdfca50f0af80aff00e986c64568c7ffd9\nEI Q \n\nq\nBT\n/ 140 Tf\n10 10 Td\n(Flare-On!)'\nET\nQ\n\nEI\nQ\n"
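In the inline image dictionary, /AHx is the abbreviated inline-image name for ASCIIHexDecode and /DCT for DCTDecode (JPEG), so the hex stream decodes straight to a JPEG. A quick sketch on the prefix of the dump above:

```python
# /AHx means the stream is plain ASCII hex; bytes.fromhex carves it.
hex_prefix = "ffd8ffe000104a46494600010100000100010000"
jpeg = bytes.fromhex(hex_prefix)

assert jpeg.startswith(b"\xff\xd8\xff")  # JPEG SOI + start of APP0 marker
assert b"JFIF" in jpeg                   # the JFIF header spotted in CyberChef
# In the real solve, decode the full hex string and write it to a .jpg file.
```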

And we get a hex string; we throw that into CyberChef and notice the JFIF header. I extracted it to a JPEG that was basically just a handful of grayscale pixels, and that's where I really didn't like the challenge: I assumed the image was junk and kept trying to fix the PDF for hours. Then it hit me (mostly because I vaguely remembered doing something similar in a previous CTF): this could be steganography, something along the lines of each pixel being a byte/bit that we need to reassemble into ASCII. I asked ChatGPT for a script to try a few of those interpretations, and go figure: each grayscale pixel value corresponds to an ASCII character, and we get the flag!

from PIL import Image
import numpy as np
import base64

def decode_flag(image_path):
    img = Image.open(image_path).convert("L")
    arr = np.array(img)

    # If the image has height > 1, just take the first row
    row = arr[0]

    # --- Method 1: threshold to bits ---
    bits = (row < 128).astype(int)  # dark=1, light=0
    bitstring = "".join(map(str, bits))
    bytes_data = [
        int(bitstring[i:i+8], 2)
        for i in range(0, len(bitstring), 8)
        if len(bitstring[i:i+8]) == 8
    ]
    decoded_ascii = bytes(bytes_data).decode(errors="ignore")

    # Try base64 decoding if possible
    decoded_b64 = None
    try:
        decoded_b64 = base64.b64decode(decoded_ascii).decode(errors="ignore")
    except Exception:
        pass

    # --- Method 2: direct grayscale to ASCII ---
    ascii_direct = "".join(chr(v) for v in row if 32 <= v < 127)

    # Also try base64 decoding of that
    decoded_b64_direct = None
    try:
        decoded_b64_direct = base64.b64decode(ascii_direct).decode(errors="ignore")
    except Exception:
        pass

    return {
        "bits_to_ascii": decoded_ascii,
        "bits_to_ascii_b64": decoded_b64,
        "grayscale_ascii": ascii_direct,
        "grayscale_ascii_b64": decoded_b64_direct
    }

if __name__ == "__main__":
    results = decode_flag("extracted_at_0x0 (3).jpg")
    for k, v in results.items():
        print(f"\n=== {k} ===\n{v}\n")

=== grayscale_ascii === Puzzl1ng-D3vilish-F0rmat@flare-on.com

Challenge 4

OK, for this one we get a broken PE file. I opened it in 010 Editor and it turns out the MZ header is missing; after patching it in a hex editor, we get a valid PE32 file.
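Instead of a hex editor, the patch can also be scripted. A minimal sketch on a toy buffer, assuming (as here) that only the two 'MZ' magic bytes at offset 0 were blanked:

```python
# Toy stand-in for the broken binary: a DOS header whose magic was zeroed.
broken = bytearray(b"\x00\x00" + b"\x90" * 62)

# Restore the 'MZ' magic (IMAGE_DOS_SIGNATURE) at offset 0.
broken[0:2] = b"MZ"
assert bytes(broken[:2]) == b"MZ"

# For the real file: read the EXE into a bytearray, apply the same two-byte
# patch, and write it back out under a new name before running it.
```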

Running it, we see the file copy itself a bunch of times into the same directory, incrementing its file name each time. It WerFaults at UnholyDragon-154.exe. The file's manifest quickly shows it was compiled with TwinBasic, which is similar to VB6 except that it compiles to native assembly instead of being interpreted. At first I thought I was supposed to open a TwinBasic IDE and debug this somehow, but nope... So I compiled a TwinBasic hello world, generated a FLIRT lib from it, and diffed the functions that don't appear in both binaries. That was a bit of a waste of time.

I then xref'd CreateProcessW and GetModuleFileName and found a big subroutine holding the binary's main logic; we see it check its own name and increment it over and over. When debugging this in x32dbg I was hitting an exception that crashed the program. At first I thought this was anti-debug, so I used Frida to instrument interesting Windows API calls. One thing then stood out: the parent process reads exactly one byte from its child and then writes one byte back to it. At first I thought I was supposed to collect all these bytes and assemble some ASCII string/flag out of them, so I wrote the Frida script below to log the reads and writes:

import frida
import sys
import re

# =============================================================================
# Embedded JavaScript script for File I/O monitoring
# =============================================================================
jscode = """
// fileio-log.js — minimal file R/W monitor

function dumpBuffer(ptr, size) {
    try {
        return ptr.readByteArray(size);
    } catch (e) {
        return null;
    }
}

function logHex(buf) {
    if (!buf) return "<null>";
    let bytes = new Uint8Array(buf);
    let hex = [];
    for (let i = 0; i < bytes.length && i < 64; i++) { // cap to 64B preview
        hex.push(("0" + bytes[i].toString(16)).slice(-2));
    }
    return hex.join(" ") + (bytes.length > 64 ? " ..." : "");
}

// Hook WriteFile
Interceptor.attach(Process.getModuleByName("kernel32.dll").getExportByName("WriteFile"), {
    onEnter: function (args) {
        this.hFile = args[0];
        this.lpBuffer = args[1];
        this.nBytes = args[2].toInt32();
    },
    onLeave: function (retval) {
        if (retval.toInt32() !== 0 && this.nBytes > 0) {
            let data = dumpBuffer(this.lpBuffer, this.nBytes);
            console.log("[WriteFile] size=" + this.nBytes + " data=" + logHex(data));
        }
    }
});

// Hook ReadFile
Interceptor.attach(Process.getModuleByName("kernel32.dll").getExportByName("ReadFile"), {
    onEnter: function (args) {
        this.hFile = args[0];
        this.lpBuffer = args[1];
        this.nBytesToRead = args[2].toInt32();
        this.lpNumberOfBytesRead = args[3];
    },
    onLeave: function (retval) {
        if (retval.toInt32() !== 0) {
            let readCount = this.lpNumberOfBytesRead.readU32();
            if (readCount > 0) {
                let data = dumpBuffer(this.lpBuffer, readCount);
                console.log("[ReadFile] size=" + readCount + " data=" + logHex(data));
            }
        }
    }
});

console.log("[*] File I/O hooks installed.");
"""

# =============================================================================
# Python script logic for spawning and hooking
# =============================================================================

def on_child_added(child):
    """
    Called when a new child process is detected.
    Hooks the child if its name matches the pattern.
    """
    print(f"[+] Detected new child process: {child.pid} - {child.path}")
    
    # Ensure child.path is a string before using re.search
    child_path = str(child.path)
    # More flexible pattern matching - check if it contains UnholyDragon and ends with .exe
    if re.search(r"UnholyDragon.*\.exe", child_path, re.IGNORECASE) or "UnholyDragon" in child_path:
        print(f"[+] Attaching to child: {child.pid} - {child.path}")
        try:
            session = device.attach(child.pid)
            session.enable_child_gating()  # Enable child gating for this child too
            load_script(session, child.pid)
        except Exception as e:
            print(f"[-] Failed to attach to child {child.pid}: {e}")
    else:
        print(f"[-] Child process {child.path} does not match the pattern. Ignoring.")
        # Must resume children that are ignored
        try:
            device.resume(child.pid)
        except Exception as e:
            print(f"[-] Failed to resume ignored child process {child.pid}: {e}")

def on_detached(reason):
    """
    Called when a script detaches from a process.
    """
    print(f"[*] Script detached from process. Reason: {reason}")

def load_script(session, pid=None):
    """
    Loads and runs the JavaScript script on a given session.
    """
    pid_str = f" {pid}" if pid else ""
    print(f"[*] Injecting script into process{pid_str}")
    try:
        script = session.create_script(jscode, runtime='v8')
        script.on('destroyed', on_detached)
        
        def on_message(message, data):
            """Message handler to forward script output to the console."""
            pid_prefix = f"[PROCESS {pid}]" if pid else "[PROCESS]"
            if message['type'] == 'send':
                print(f"{pid_prefix} {message['payload']}")
            elif message['type'] == 'error':
                print(f"{pid_prefix}[ERROR] {message['description']}")

        script.on('message', on_message)
        script.load()
    except Exception as e:
        error_pid = f" {pid}" if pid else ""
        print(f"[-] Failed to load script into process{error_pid}: {e}")

# Get local device and set up child tracking
try:
    device = frida.get_local_device()
    device.on("child-added", on_child_added)

    target_executable_path = "UnholyDragon_win32.exe"
    print(f"[*] Spawning parent process: {target_executable_path}")
    
    # Spawn the process and attach to it
    pid = device.spawn([target_executable_path])
    print(f"[+] Spawned process with PID: {pid}")
    
    # Attach to the spawned process
    session = device.attach(pid)
    session.enable_child_gating()
    load_script(session, pid)
    
    # Resume the spawned process
    device.resume(pid)
    
    print("[*] Waiting for child processes and file I/O activity. Press Ctrl+C to exit.")
    sys.stdin.read() # Wait until user input to exit
    
except frida.NotSupportedError:
    print("[!] Child gating is not supported on this platform, or you need to run as administrator.")
    sys.exit(1)
except Exception as e:
    print(f"[!] An error occurred: {e}")
    sys.exit(1)
finally:
    try:
        session.disable_child_gating()  # child gating is enabled/disabled per-session, not per-device
    except Exception:
        pass

Running the script, we get:

PS C:\Users\Someone\Downloads\4_-_UnholyDragon (1)> python3 .\Hookfrida.py
[*] Spawning parent process: UnholyDragon-1.exe
[+] Spawned process with PID: 35824
[*] Injecting script into process 35824
[*] File I/O hooks installed.
[*] Waiting for child processes and file I/O activity. Press Ctrl+C to exit.
[ReadFile] size=1 data=3d
[WriteFile] size=1 data=4b
[+] Detected new child process: 27644 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[+] Attaching to child: 27644 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[*] Injecting script into process 27644
[*] File I/O hooks installed.
[ReadFile] size=1 data=30
[WriteFile] size=1 data=d6
[+] Detected new child process: 35796 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[+] Attaching to child: 35796 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[*] Injecting script into process 35796
[*] File I/O hooks installed.
[*] Script detached from process. Reason: <_frida.Script object at 0x000001F3B80B3750>
[ReadFile] size=1 data=3c
[WriteFile] size=1 data=50
[+] Detected new child process: 24748 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[+] Attaching to child: 24748 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[*] Injecting script into process 24748
[*] File I/O hooks installed.
[*] Script detached from process. Reason: <_frida.Script object at 0x000001F3B80B3870>
[ReadFile] size=1 data=fb
[WriteFile] size=1 data=20
[+] Detected new child process: 43516 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[+] Attaching to child: 43516 - C:\Users\Someone\Downloads\4_-_UnholyDragon
[*] Injecting script into process 43516
[*] File I/O hooks installed.
[*] Script detached from process. Reason: <_frida.Script object at 0x000001F3B80B3AB0>
[ReadFile] size=1 data=f9
[WriteFile] size=1 data=98

That was super slow, and the written bytes weren't decodable as text. That's when I got another idea: what if the binary was patching itself, slowly fixing itself/unpacking into something else? I took the original binary we get with the challenge (with the fixed MZ header), renamed it to its original file name from the VersionInfo, UnholyDragon_win32.exe, and ran it. When it hits UnholyDragon-140.exe we see some forms/windows pop up, but nothing interesting in them. Seems like the theory I had was correct. At UnholyDragon-150.exe we get another broken exe; again, we patch its MZ header, run it and...

Nice! We get the flag! dr4g0n_d3n1al_of_s3rv1ce@flare-on.com
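For reference, the MZ patch itself is just two bytes; a minimal Python sketch (assuming, as in our case, that only the 'MZ' magic at offset 0 was clobbered):

```python
def fix_mz_header(path: str) -> None:
    """Restore the 'MZ' magic at offset 0 of a PE file, in place."""
    with open(path, "r+b") as f:
        f.seek(0)
        f.write(b"MZ")

# usage: fix_mz_header("UnholyDragon-150.exe")
```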

Challenge 5

Ok so this one was rough (and fun)! We get an x64 binary, ntfsm.exe; try to input a password, and it spawns itself as a subprocess a bunch of times and outputs "wrong!" to the console (it also tells us the password needs to be 16 characters long). The binary has a ton of jmp instructions which make it hard to analyze in IDA. You can disable "aggressively make thunks of jmps" (or whatever the option is called) in IDA to speed up analysis, but it will probably still hang, and you won't be able to graph or decompile functions with that many thunks. I read on Discord that someone had a lot of success with Cutter + Ghidra, so I thought I'd try that. We open the binary in Cutter, look up the string "wrong!" and xref it; that drops us in a huge function with a bunch of outcomes and a "correct!" string.

We also see a couple of interesting strings: "A strange game. The only winning move is not to play" "state" "input" "position" "transitions"

We also note that there's a bunch of large switch case/jmp tables and a large number of blocks checking if a variable equals an ascii character. Below is an example of such a case block:

0x140860241      rdtsc
0x140860243      shl     rdx, 0x20
0x140860247      or      rax, rdx
0x14086024a      mov     qword [arg_58d10h], rax
0x140860252      rdtsc
0x140860254      shl     rdx, 0x20
0x140860258      or      rax, rdx
0x14086025b      mov     qword [arg_58d18h], rax
0x140860263      mov     rax, qword [arg_58d10h]
0x14086026b      mov     rcx, qword [arg_58d18h]
0x140860273      sub     rcx, rax
0x140860276      mov     rax, rcx
0x140860279      cmp     rax, 0x12ad1659 -- magic constant: rdtsc-delta threshold for the delay loop
0x14086027f      jl      0x140860252
0x140860281      movzx   eax, byte [arg_28h]
0x140860286      mov     byte [arg_3bb84h], al
0x14086028d      cmp     byte [arg_3bb84h], 0x4a -- ascii comparison
0x140860295      je      0x1408602ce
0x140860297      cmp     byte [arg_3bb84h], 0x55 -- ascii comparison
0x14086029f      je      0x1408602ef
0x1408602a1      cmp     byte [arg_3bb84h], 0x69 -- ascii comparison
0x1408602a9      je      0x1408602ad
0x1408602ab      jmp     0x140860310
0x1408602ad      mov     qword [arg_58d28h], 1 -- transition to state 1
0x1408602b9      mov     rax, qword [arg_58ab0h]
0x1408602c1      inc     rax
0x1408602c4      mov     qword [arg_58ab0h], rax
0x1408602cc      jmp     0x14086033f
0x1408602ce      mov     qword [arg_58d28h], 2 -- transition to state 2
0x1408602da      mov     rax, qword [arg_58ab0h]
0x1408602e2      inc     rax
0x1408602e5      mov     qword [arg_58ab0h], rax
0x1408602ed      jmp     0x14086033f
0x1408602ef      mov     qword [arg_58d28h], 3 -- transition to state 3
0x1408602fb      mov     rax, qword [arg_58ab0h]
0x140860303      inc     rax
0x140860306      mov     qword [arg_58ab0h], rax
0x14086030e      jmp     0x14086033f
0x140860310      mov     dword [nShowCmd], 5 ; INT nShowCmd
0x140860318      mov     qword [lpDirectory], 0 ; LPCSTR lpDirectory
0x140860321      lea     r9, [data.141252940] ; 0x141252940 ; LPCSTR lpParameters
0x140860328      lea     r8, [data.141252a08] ; 0x141252a08 ; LPCSTR lpFile
0x14086032f      lea     rdx, [data.141252a94] ; 0x141252a94 ; LPCSTR lpOperation
0x140860336      xor     ecx, ecx  ; HWND hwnd
0x140860338      call    qword [ShellExecuteA] ; 0x14133b408 ; HINSTANCE ShellExecuteA(HWND hwnd, LPCSTR lpOperation, LPCSTR lpFile, LPCSTR lpParameters, LPCSTR lpDirectory, INT nShowCmd)
0x14086033e      nop
0x14086033f      jmp     0x140c685ee
0x140860344      rdtsc

Ok so these blocks do basically the below in pseudocode

if ch == 0x4a:   // 'J'
    goto transition_to_state2;
if ch == 0x55:   // 'U'
    goto transition_to_state3;
if ch == 0x69:   // 'i'
    goto transition_to_state1;
goto dead_end;   // ShellExecuteA(...)

Another thing I noticed was that the state blocks almost all share the same structure, highlighted below:

rdtsc
<somestuff>
rdtsc
<somestuff>
cmp     rax, 0x12ad1659 -- magic constant (rdtsc-delta threshold)
<somestuff>
cmp     byte [arg_3bb84h], <someascii>
je < jumpto TransitionN>

This is important for later because it gives us some nice byte patterns to search to find all the states.
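As a quick sanity check, we can compute in Python exactly what that `cmp rax, imm32` assembles to; this is the byte pattern we'll grep the binary for later:

```python
import struct

MAGIC = 0x12AD1659
# 'cmp rax, imm32' encodes as REX.W prefix (48) + opcode 3D + little-endian imm32
pattern = b"\x48\x3d" + struct.pack("<I", MAGIC)
print(pattern.hex())  # 483d5916ad12
```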

From the above, we know we're dealing with a finite state machine where each character in the password is a state transition. Running this while logging it in Procmon shows the binary calling CreateFile, WriteFile and ReadFile on NTFS Alternate Data Streams. I worked on this challenge with carbon_xx, whom I met on the OALabs Discord, and he wrote a useful Python script to monitor these data streams, which was way better than what I had (API Monitor x64). Below is the script:

import time

exe = r"ntfsm.exe"
streams = ["input", "position", "state", "transitions"]

def read_ads(path, stream):
    try:
        with open(f"{path}:{stream}", "rb") as f:
            return f.read()
    except FileNotFoundError:
        return None

prev = {}

while True:
    for s in streams:
        data = read_ads(exe, s)
        if data is None:
            continue
        hexstr = " ".join(f"{b:02X}" for b in data)
        if prev.get(s) != hexstr:
            print(f"[{time.strftime('%H:%M:%S')}] Stream '{s}' changed:")
            print(f"  {hexstr}")
            prev[s] = hexstr
    time.sleep(0.1)

And its output on an example run:

ntfsm.exe UP6aaaaaaaaaaaaa
[16:42:59] Stream 'input' changed:
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[16:42:59] Stream 'position' changed:
  00 00 00 00 00 00 00 00
[16:42:59] Stream 'state' changed:
  FF FF FF FF FF FF FF FF
[16:42:59] Stream 'transitions' changed:
  00 00 00 00 00 00 00 00
[16:43:03] Stream 'input' changed:
  55 50 36 61 61 61 61 61 61 61 61 61 61 61 61 61
[16:43:03] Stream 'position' changed:
  01 00 00 00 00 00 00 00
[16:43:03] Stream 'state' changed:
  03 00 00 00 00 00 00 00
[16:43:03] Stream 'transitions' changed:
  01 00 00 00 00 00 00 00
[16:43:03] Stream 'position' changed:
  03 00 00 00 00 00 00 00
[16:43:03] Stream 'state' changed:
  0F 00 00 00 00 00 00 00
[16:43:03] Stream 'transitions' changed:
  03 00 00 00 00 00 00 00
[16:43:03] Stream 'position' changed:
  04 00 00 00 00 00 00 00
[16:43:03] Stream 'position' changed:
  06 00 00 00 00 00 00 00
[16:43:03] Stream 'position' changed:
  08 00 00 00 00 00 00 00
[16:43:04] Stream 'position' changed:
  0A 00 00 00 00 00 00 00
[16:43:04] Stream 'position' changed:
  0C 00 00 00 00 00 00 00
[16:43:04] Stream 'position' changed:
  0D 00 00 00 00 00 00 00
[16:43:04] Stream 'position' changed:
  0F 00 00 00 00 00 00 00
[16:43:05] Stream 'input' changed:
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[16:43:05] Stream 'position' changed:
  00 00 00 00 00 00 00 00
[16:43:05] Stream 'state' changed:
  FF FF FF FF FF FF FF FF
[16:43:05] Stream 'transitions' changed:
  00 00 00 00 00 00 00 00

A couple of things that are interesting: position is incremented every time we check a password character

input doesn't really matter

state is changed if we give the right character to move from StateX to StateY
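Each stream holds a single little-endian qword, so the hexdumps above decode trivially; note the initial FF FF ... FF 'state' value is -1 when read as signed:

```python
import struct

def ads_qword(hexdump: str) -> int:
    """Decode one 8-byte ADS dump line (little-endian) as a signed qword."""
    return struct.unpack("<q", bytes.fromhex(hexdump.replace(" ", "")))[0]

print(ads_qword("FF FF FF FF FF FF FF FF"))  # -1: the 'state' value before any input
print(ads_qword("0F 00 00 00 00 00 00 00"))  # 15: the 'state' value seen above
```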

So my theory was that we'd need to find 16 transitions from State0 to some final StateN in order to recover the password. Going back to our main function with the "wrong!" string and the switch-case table, I had some interesting findings using the jsdec and Ghidra decompilers embedded into Cutter. Both gave slightly different info useful to our analysis. jsdec gave a nice overview of what was basically going on; it's not strictly necessary, but it helps.

Ghidra:

So Ghidra actually helped me quite a bit. At first, I was just trusting jsdec. I drew a Mermaid diagram of states and transitions and noted the following states (they have numbers but are not necessarily sorted in order):

This was a good start to get an idea of what the FSM looked like. However, we're missing most of the states: as established before, there are thousands of states and transitions. I initially wrote some Python to find all the states, but then I noticed I was missing the most important thing: I knew StateN had x transitions that allowed it to move to some other state, but in my final output I didn't have the mapping between the states I had parsed and the states they transitioned to. In other words, my output looked something like this:

  "UnknownState43156": {
    "address": "0x1405f0007",
    "transitions": {
      "F": {
        "ascii": 70,
        "next_state": "0x10ecd"
      },
      "u": {
        "ascii": 117,
        "next_state": "0x10ecc"
      }
    }
  },

That's when I noticed that in both the Ghidra output and the Cutter disassembly, there's a comment telling us where the switch-case jump table is:

0x14000ca5a      jmp     rcx       ; switch table (17 cases) at 0x140c687b8

Going to this address in a hexdump view, we see the following bytes:

00000000  41 02 86 00 fb 58 a1 00 57 4f 1f 00 4c 9e 75 00  |A...ûX¡.WO..L.u.|
00000010  3b 1a 3b 00 db a0 9f 00 dd 67 3c 00 6d 25 a7 00  |;.;.Û ..Ýg<.m%§.|
00000020  e9 a1 25 00 4a b4 81 00                          |é¡%.J´..|
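These are 4-byte little-endian RVAs; decoding the first few entries and adding the x64 image base (0x140000000) recovers the state handler addresses:

```python
import struct

IMAGE_BASE = 0x140000000
raw = bytes.fromhex("41028600fb58a100574f1f004c9e7500")  # first four table entries

# each jump-table entry is a little-endian dword RVA; image base + RVA = state address
targets = [IMAGE_BASE + rva for rva in struct.unpack("<4I", raw)]
print([hex(t) for t in targets])
# ['0x140860241', '0x140a158fb', '0x1401f4f57', '0x140759e4c']
```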

That first entry, 0x140860241 (the dwords are little-endian RVAs added to the image base), is the address of our first state, and so on. That gave me the last piece of the puzzle. Putting these things together, I wrote the below script to dump out the full FSM (the script also dumps other heuristics, as I initially had other theories about how to find the path):

#!/usr/bin/env python3
"""
Finite State Machine Reverse Engineering Tool
Extracts state transitions from ntfsm.exe by analyzing assembly patterns
"""

import json
import pefile
from capstone import *
from capstone.x86 import *
import matplotlib.pyplot as plt
import networkx as nx
from collections import deque

def read_text_section(filename):
    """Read the entire .text section from PE file"""
    pe = pefile.PE(filename)
    
    print(f"[*] PE Image Base: 0x{pe.OPTIONAL_HEADER.ImageBase:x}")
    print(f"[*] PE Sections:")
    
    # Print all sections for debugging
    for section in pe.sections:
        section_start = pe.OPTIONAL_HEADER.ImageBase + section.VirtualAddress
        section_end = section_start + section.Misc_VirtualSize
        section_name = section.Name.decode('utf-8').rstrip('\x00')
        print(f"    {section_name}: 0x{section_start:x} - 0x{section_end:x} (size: 0x{section.Misc_VirtualSize:x})")
    
    # Find and read the .text section
    for section in pe.sections:
        section_name = section.Name.decode('utf-8').rstrip('\x00')
        if section_name == '.text':
            section_start = pe.OPTIONAL_HEADER.ImageBase + section.VirtualAddress
            # Read the entire section data directly
            data = section.get_data()
            print(f"[*] Read {len(data)} bytes from .text section")
            return data, section_start, pe.OPTIONAL_HEADER.ImageBase
    
    raise ValueError(".text section not found in PE file")

def find_pattern(data, pattern, text_section_start, search_start, search_end):
    """Find all occurrences of a byte pattern within address range"""
    # Remove spaces if present and convert hex string to bytes
    pattern_clean = pattern.replace(' ', '')
    pattern_bytes = bytes.fromhex(pattern_clean)
    print(f"[*] Looking for byte pattern: {' '.join(f'{b:02x}' for b in pattern_bytes)}")
    occurrences = []
    
    # Calculate offsets relative to the start of .text section
    # If search_end is less than search_start or less than text_section_start, search entire section
    if search_end <= search_start or search_end < text_section_start:
        print(f"[!] Invalid search range (0x{search_start:x} - 0x{search_end:x}), searching entire .text section")
        search_offset_start = 0
        search_offset_end = len(data)
    else:
        if search_start < text_section_start:
            search_offset_start = 0
        else:
            search_offset_start = search_start - text_section_start
        
        search_offset_end = min(search_end - text_section_start, len(data))
    
    print(f"[*] Searching in .text section from offset 0x{search_offset_start:x} to 0x{search_offset_end:x}")
    print(f"[*] Search range in virtual addresses: 0x{text_section_start + search_offset_start:x} to 0x{text_section_start + search_offset_end:x}")
    print(f"[*] Total data size: {len(data)} bytes")
    
    pos = search_offset_start
    while pos < search_offset_end:
        idx = data.find(pattern_bytes, pos, search_offset_end)
        if idx == -1:
            break
        # Convert back to virtual address
        occurrences.append(text_section_start + idx)
        pos = idx + 1
    
    return occurrences

def find_first_rdtsc_before_pattern(data, pattern_addr, text_section_start, md):
    """Find the first RDTSC instruction before the pattern match"""
    # Search backwards from pattern address
    # Start at most 100 bytes before the pattern
    search_start = max(0, pattern_addr - text_section_start - 100)
    search_end = pattern_addr - text_section_start
    
    # Disassemble backwards to find RDTSC
    rdtsc_addresses = []
    
    # We'll disassemble from search_start and look for RDTSC instructions
    offset = search_start
    while offset < search_end:
        code_slice = data[offset:offset + 15]
        disasm = list(md.disasm(code_slice, text_section_start + offset, count=1))
        
        if not disasm:
            offset += 1
            continue
            
        instr = disasm[0]
        
        # Check if this is RDTSC (opcode 0f31)
        if instr.bytes == b'\x0f\x31':
            rdtsc_addresses.append(instr.address)
        
        offset += instr.size
    
    # Return the second-to-last RDTSC found (the first of the pair)
    # There should be two RDTSC instructions before the pattern
    if len(rdtsc_addresses) >= 2:
        return rdtsc_addresses[-2]  # Second-to-last is the first RDTSC of the pair
    elif len(rdtsc_addresses) == 1:
        return rdtsc_addresses[0]
    
    return None

def disassemble_until_rdtsc(data, start_addr, text_section_start, md):
    """Disassemble instructions from start until RDTSC (0f31) is found"""
    instructions = []
    
    # Calculate offset in the data buffer
    offset = start_addr - text_section_start
    current_offset = offset
    max_instructions = 1000  # Safety limit
    
    for i in range(max_instructions):
        if current_offset >= len(data):
            break
            
        # Disassemble one instruction
        code_slice = data[current_offset:current_offset + 15]  # Max x86 instruction length
        disasm = list(md.disasm(code_slice, text_section_start + current_offset, count=1))
        
        if not disasm:
            break
            
        instr = disasm[0]
        instructions.append(instr)
        
        # Check if this is RDTSC (opcode 0f31)
        if instr.bytes == b'\x0f\x31':
            break
            
        current_offset += instr.size
    
    return instructions

def parse_jump_table(data, jump_table_addr, text_section_start, num_entries):
    """Parse the jump table at 0x140c687b8 to map state numbers to addresses"""
    offset = jump_table_addr - text_section_start
    
    state_map = {}  # Maps address -> state number
    
    # Each entry is 4 bytes (DWORD) representing a relative address
    for i in range(num_entries):
        entry_offset = offset + (i * 4)
        if entry_offset + 4 > len(data):
            break
        
        # Read 4 bytes as little-endian DWORD
        dword = int.from_bytes(data[entry_offset:entry_offset + 4], byteorder='little')
        
        # This is a relative address from the image base (0x140000000)
        # Add image base to get the full virtual address
        full_address = 0x140000000 + dword
        
        state_map[full_address] = i
    
    return state_map

def extract_transitions(instructions):
    """Extract state transitions from disassembled instructions"""
    transitions = {}
    cmp_targets = []  # List of (ascii_char, je_target_addr)
    
    # First pass: find all CMP instructions with printable ASCII characters
    for i, instr in enumerate(instructions):
        if instr.mnemonic == 'cmp':
            # Look for cmp with immediate byte value
            operands = instr.op_str
            # Check if comparing with a byte value (various formats possible)
            if ', 0x' in operands:
                parts = operands.rsplit(', 0x', 1)  # Split from the right to get the immediate value
                if len(parts) == 2:
                    try:
                        # Extract just the hex value (might have trailing characters)
                        hex_value = parts[1].split()[0].rstrip(',')
                        value = int(hex_value, 16)
                        # Check if it's a printable ASCII character (0x20-0x7E)
                        if 0x20 <= value <= 0x7E:
                            # Find the next JE instruction
                            for j in range(i + 1, min(i + 10, len(instructions))):
                                if instructions[j].mnemonic == 'je':
                                    # Extract jump target
                                    target_str = instructions[j].op_str.strip()
                                    try:
                                        target_addr = int(target_str, 16) if target_str.startswith('0x') else int(target_str, 0)
                                        cmp_targets.append((chr(value), value, target_addr))
                                    except ValueError:
                                        pass
                                    break
                    except (ValueError, IndexError):
                        continue
    
    # Second pass: for each JE target, find the MOV instruction that sets the next state
    for char, ascii_val, je_target in cmp_targets:
        # Find instructions at or near the JE target
        next_state = None
        
        for instr in instructions:
            # Check if this instruction is at or after the JE target
            if instr.address >= je_target and instr.address < je_target + 100:
                # Look for: mov qword [rsp + 0xXXXXX], 0xXXXX or mov qword ptr [rsp + 0xXXXXX], 0xXXXX
                if instr.mnemonic == 'mov' and 'qword' in instr.op_str and '[rsp' in instr.op_str:
                    parts = instr.op_str.split(',')
                    if len(parts) >= 2:
                        imm_str = parts[-1].strip().rstrip(',')
                        try:
                            if imm_str.startswith('0x'):
                                next_state = int(imm_str, 16)
                            else:
                                next_state = int(imm_str)
                            break
                        except ValueError:
                            continue
        
        if next_state is not None:
            transitions[char] = {
                'ascii': ascii_val,
                'next_state': hex(next_state)
            }
    
    return transitions

def create_state_machine_graph(state_machine, output_file='fsm_graph.png'):
    """Create a visual representation of the state machine"""
    print(f"\n[*] Creating state machine visualization...")
    
    # Create directed graph
    G = nx.DiGraph()
    
    # Add nodes and edges
    for state_name, state_data in state_machine.items():
        G.add_node(state_name)
        for char, info in state_data['transitions'].items():
            next_state_hex = info['next_state']
            # Find which state this next_state corresponds to
            target_state = None
            for s_name, s_data in state_machine.items():
                # Compare the hex values of next_state in transitions with state addresses
                if next_state_hex == s_data.get('state_id', None):
                    target_state = s_name
                    break
            
            if target_state:
                G.add_edge(state_name, target_state, label=char)
    
    # Create figure with high DPI
    plt.figure(figsize=(40, 30), dpi=300)
    
    # Use spring layout for better visualization
    pos = nx.spring_layout(G, k=2, iterations=50, seed=42)
    
    # Draw nodes
    nx.draw_networkx_nodes(G, pos, node_color='lightblue', 
                          node_size=500, alpha=0.9)
    
    # Draw edges
    nx.draw_networkx_edges(G, pos, edge_color='gray', 
                          arrows=True, arrowsize=10, 
                          arrowstyle='->', alpha=0.5)
    
    # Draw labels
    nx.draw_networkx_labels(G, pos, font_size=6, font_weight='bold')
    
    # Draw edge labels (transition characters)
    edge_labels = nx.get_edge_attributes(G, 'label')
    nx.draw_networkx_edge_labels(G, pos, edge_labels, font_size=4)
    
    plt.title("State Machine Transition Graph", fontsize=20)
    plt.axis('off')
    plt.tight_layout()
    plt.savefig(output_file, dpi=300, bbox_inches='tight')
    plt.close()
    
    print(f"[*] State machine graph saved to {output_file}")
    return G

def find_paths_of_length(state_machine, start_state, target_length=16):
    """Find all paths from start_state that are exactly target_length transitions long"""
    print(f"\n[*] Finding all paths of length {target_length} from {start_state}...")
    
    # Build adjacency list with transition characters
    adjacency = {}
    state_id_to_name = {}
    
    # First, create a mapping from state_id (hex value) to state name
    for state_name, state_data in state_machine.items():
        if 'state_id' in state_data:
            state_id_to_name[state_data['state_id']] = state_name
    
    # Build adjacency list
    for state_name, state_data in state_machine.items():
        adjacency[state_name] = []
        for char, info in state_data['transitions'].items():
            next_state_hex = info['next_state']
            if next_state_hex in state_id_to_name:
                target_state = state_id_to_name[next_state_hex]
                adjacency[state_name].append((target_state, char))
    
    # BFS to find all paths of exactly target_length
    paths = []
    queue = deque([(start_state, "", [])])  # (current_state, path_string, visited_states)
    
    while queue:
        current_state, path_string, visited = queue.popleft()
        
        # If we've reached the target length, save this path
        if len(path_string) == target_length:
            paths.append({
                'path': path_string,
                'states': visited + [current_state]
            })
            continue
        
        # If we've exceeded the target length, skip
        if len(path_string) > target_length:
            continue
        
        # Explore neighbors
        if current_state in adjacency:
            for next_state, char in adjacency[current_state]:
                queue.append((next_state, path_string + char, visited + [current_state]))
    
    return paths

def save_paths_to_file(paths, filename='paths_length_16.txt'):
    """Save all paths to a file"""
    with open(filename, 'w') as f:
        f.write(f"# All paths of length {len(paths[0]['path']) if paths else 0} from State0\n")
        f.write(f"# Total paths found: {len(paths)}\n\n")
        
        for i, path_info in enumerate(paths, 1):
            f.write(f"Path {i}:\n")
            f.write(f"  String: {path_info['path']}\n")
            f.write(f"  States: {' -> '.join(path_info['states'])}\n\n")
    
    print(f"[*] Saved {len(paths)} paths to {filename}")

def main():
    filename = 'ntfsm.exe'
    pattern = '483d5916ad12'
    # Fixed: your original range was 0x14000b00 - 0x140c7b00, but 0x14000b00 < 0x140001000 (text start)
    # and 0x140c7b00 is also less than the text start, so let's search the whole .text section
    search_start = 0x140001000  # Start of .text section
    search_end = 0x1410d1000    # Near end of .text section
    
    # Set to True to print full disassembly for debugging
    DEBUG_PRINT_DISASM = True
    
    print(f"[*] Loading PE file: {filename}")
    
    # Read the .text section
    try:
        data, text_section_start, image_base = read_text_section(filename)
        print(f"[*] .text section starts at: 0x{text_section_start:x}")
    except Exception as e:
        print(f"[!] Error loading PE file: {e}")
        return
    
    # Find all occurrences of the pattern
    print(f"[*] Searching for pattern: {pattern}")
    occurrences = find_pattern(data, pattern, text_section_start, search_start, search_end)
    print(f"[*] Found {len(occurrences)} occurrences")
    
    # Initialize Capstone disassembler for x86-64
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    md.detail = True
    
    # Process each occurrence
    state_machine = {}
    states_without_exit_jmp = []  # Track states that don't have the exit jump
    address_to_unknown_state = {}  # Maps RDTSC address to UnknownStateN for later renaming
    
    for idx, addr in enumerate(occurrences):
        state_name = f"UnknownState{idx}"
        print(f"\n[*] Processing {state_name} at 0x{addr:x}")
        
        # Find the first RDTSC before the pattern
        first_rdtsc_addr = find_first_rdtsc_before_pattern(data, addr, text_section_start, md)
        
        if first_rdtsc_addr is None:
            print(f"    [!] Could not find RDTSC before pattern, skipping")
            continue
        
        print(f"    First RDTSC found at: 0x{first_rdtsc_addr:x}")
        address_to_unknown_state[first_rdtsc_addr] = state_name
        
        # Disassemble from the pattern until RDTSC
        instructions = disassemble_until_rdtsc(data, addr, text_section_start, md)
        print(f"    Disassembled {len(instructions)} instructions")
        
        # Debug: Print disassembly
        if DEBUG_PRINT_DISASM:
            print(f"\n    === Disassembly for {state_name} ===")
            for instr in instructions:
                print(f"    0x{instr.address:x}\t{instr.mnemonic}\t{instr.op_str}")
            print(f"    === End of disassembly ===\n")
        
        # Check if this block has the exit jump (e9 85 3f 00 00 or similar jmp to 0x140c685ee)
        has_exit_jmp = False
        
        for instr in instructions:
            # Check for jmp to 0x140c685ee (the exit address)
            if instr.mnemonic == 'jmp':
                # Parse the jump target
                target_str = instr.op_str.strip()
                try:
                    target_addr = int(target_str, 16) if target_str.startswith('0x') else int(target_str, 0)
                    if target_addr == 0x140c685ee:
                        has_exit_jmp = True
                        break
                except ValueError:
                    pass
        
        if not has_exit_jmp:
            states_without_exit_jmp.append({
                'state_name': state_name,
                'address': hex(first_rdtsc_addr)
            })
        
        # Extract transitions
        transitions = extract_transitions(instructions)
        
        if transitions:
            state_machine[state_name] = {
                'address': hex(first_rdtsc_addr),
                'transitions': transitions
            }
            print(f"    Found {len(transitions)} transitions: {list(transitions.keys())}")
        else:
            print(f"    No transitions found")
    
    # Parse the jump table to map addresses to state numbers
    print(f"\n[*] Parsing jump table at 0x140c687b8")
    jump_table_addr = 0x140c687b8
    num_states = len(state_machine)
    state_map = parse_jump_table(data, jump_table_addr, text_section_start, num_states * 2)  # Read extra entries to be safe
    
    print(f"[*] Found {len(state_map)} entries in jump table")
    
    # Rename UnknownStateN to StateX based on jump table
    renamed_state_machine = {}
    unknown_to_state = {}  # Maps UnknownStateN -> StateX
    
    for unknown_name, state_data in state_machine.items():
        addr = int(state_data['address'], 16)
        
        if addr in state_map:
            state_number = state_map[addr]
            new_name = f"State{state_number}"
            unknown_to_state[unknown_name] = new_name
            renamed_state_machine[new_name] = state_data
            print(f"[*] Renamed {unknown_name} -> {new_name} (address: {state_data['address']})")
        else:
            # Keep as UnknownState if not found in jump table
            renamed_state_machine[unknown_name] = state_data
            unknown_to_state[unknown_name] = unknown_name
            print(f"[!] Could not find {unknown_name} at {state_data['address']} in jump table")
    
    state_machine = renamed_state_machine
    
    # Add state_id to each state for easier lookups (the hex value that other states reference)
    # This is needed for graph creation and path finding
    for state_name, state_data in state_machine.items():
        # Extract the state number from the name (e.g., "State42" -> 42)
        if state_name.startswith('State') and state_name[5:].isdigit():
            state_num = int(state_name[5:])
            # The state_id is the hex value that represents this state in transitions
            # We need to find what hex value corresponds to this state
            # Looking at the transitions, they reference hex values like 0x4eb9
            # These should match up with state numbers in some way
            state_data['state_id'] = hex(state_num)
    
    # Update states_without_exit_jmp with renamed states
    for state_info in states_without_exit_jmp:
        old_name = state_info['state_name']
        if old_name in unknown_to_state:
            state_info['state_name'] = unknown_to_state[old_name]
    
    # Save to JSON file
    output_file = 'fsm_output.txt'
    with open(output_file, 'w') as f:
        json.dump(state_machine, f, indent=2)
    
    print(f"\n[*] State machine saved to {output_file}")
    print(f"[*] Total states found: {len(state_machine)}")
    
    # Collect unique next_states and ASCII characters with occurrence counts
    next_state_counts = {}
    unique_ascii_chars = set()
    
    for state_name, state_data in state_machine.items():
        for char, info in state_data['transitions'].items():
            next_state = info['next_state']
            next_state_counts[next_state] = next_state_counts.get(next_state, 0) + 1
            unique_ascii_chars.add(char)
    
    # Save unique next_states with occurrence counts to file (sorted by count)
    next_states_file = 'unique_next_states.txt'
    with open(next_states_file, 'w') as f:
        f.write("# Format: next_state : occurrence_count\n")
        for state, count in sorted(next_state_counts.items(), key=lambda x: x[1]):
            f.write(f"{state} : {count}\n")
    
    print(f"[*] Unique next_states with counts saved to {next_states_file} ({len(next_state_counts)} unique values)")
    
    # Print states with lowest occurrence counts (likely password path)
    print(f"\n[*] States with lowest occurrence counts (potential password path):")
    for state, count in sorted(next_state_counts.items(), key=lambda x: x[1])[:10]:
        print(f"    {state} : {count} occurrence(s)")
    
    # Save unique ASCII characters to file
    ascii_chars_file = 'unique_ascii_chars.txt'
    with open(ascii_chars_file, 'w') as f:
        for char in sorted(unique_ascii_chars):
            f.write(f"{char}\n")
    
    print(f"[*] Unique ASCII characters saved to {ascii_chars_file} ({len(unique_ascii_chars)} unique characters)")
    
    # Collect transition patterns (sorted ASCII characters for each state)
    transition_pattern_counts = {}
    
    for state_name, state_data in state_machine.items():
        # Get all transition characters for this state and sort them
        chars = ''.join(sorted(state_data['transitions'].keys()))
        transition_pattern_counts[chars] = transition_pattern_counts.get(chars, 0) + 1
    
    # Save transition patterns with occurrence counts to file (sorted by count)
    patterns_file = 'transition_patterns.txt'
    with open(patterns_file, 'w') as f:
        f.write("# Format: transition_pattern : occurrence_count\n")
        f.write("# Pattern shows all ASCII chars that lead to transitions from a state (sorted alphabetically)\n")
        for pattern, count in sorted(transition_pattern_counts.items(), key=lambda x: x[1]):
            f.write(f"{pattern} : {count}\n")
    
    print(f"[*] Transition patterns with counts saved to {patterns_file} ({len(transition_pattern_counts)} unique patterns)")
    
    # Print patterns with lowest occurrence counts (likely password path)
    print(f"\n[*] Transition patterns with lowest occurrence counts:")
    for pattern, count in sorted(transition_pattern_counts.items(), key=lambda x: x[1])[:10]:
        print(f"    '{pattern}' : {count} occurrence(s)")
    
    # Save states without exit jump to file
    no_exit_file = 'states_without_exit_jmp.txt'
    with open(no_exit_file, 'w') as f:
        f.write("# States that don't have jmp 0x140c685ee (exit jump)\n")
        f.write("# These states likely lead to success/password acceptance\n")
        for state_info in states_without_exit_jmp:
            f.write(f"{state_info['state_name']} @ {state_info['address']}\n")
    
    print(f"[*] States without exit jump saved to {no_exit_file} ({len(states_without_exit_jmp)} states)")
    
    if states_without_exit_jmp:
        print(f"\n[*] States WITHOUT exit jump (potential success states):")
        for state_info in states_without_exit_jmp[:10]:
            print(f"    {state_info['state_name']} @ {state_info['address']}")
    
    # Create state machine visualization
    try:
        G = create_state_machine_graph(state_machine, 'fsm_graph.png')
    except Exception as e:
        print(f"[!] Error creating graph visualization: {e}")
    
    # Find all paths of length 16 from State0
    if 'State0' in state_machine:
        try:
            paths = find_paths_of_length(state_machine, 'State0', target_length=16)
            if paths:
                save_paths_to_file(paths, 'paths_length_16.txt')
                print(f"\n[*] Found {len(paths)} paths of length 16 from State0")
                
                # Print first few paths as examples
                print(f"\n[*] Example paths (first 5):")
                for i, path_info in enumerate(paths[:5], 1):
                    print(f"    Path {i}: {path_info['path']}")
            else:
                print(f"\n[!] No paths of length 16 found from State0")
        except Exception as e:
            print(f"[!] Error finding paths: {e}")
    else:
        print(f"\n[!] State0 not found in state machine, cannot search for paths")
    
    # Print summary
    print("\n[*] Summary:")
    for state_name, state_data in state_machine.items():
        print(f"  {state_name} @ {state_data['address']}")
        for char, info in state_data['transitions'].items():
            print(f"    '{char}' (0x{info['ascii']:02x}) -> {info['next_state']}")

if __name__ == '__main__':
    main()

Ok so this script does the following:

  1. Find all states by looking for the bytes of cmp rax, 0x12ad1659

  2. Find the first rdtsc above the address we found in step 1 (each state block is bracketed by two rdtsc instructions).

  3. Make that the address of UnknownStateN

  4. Disassemble until we hit the bottom rdtsc

  5. Find all cmp byte [arg_3bb84h], <some ascii> instructions and record these as the transition characters for this state.

  6. Record the je to transitionN target for each of those comparisons.

  7. Count how many states we have

  8. Go to the address of our jump table 0x140c687b8

  9. Parse out all the 4-byte addresses and record these as StateN (renaming all the UnknownStates we got in step 3)
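
The needle in step 1 works because 48 3d is REX.W + CMP RAX, imm32, with the immediate stored little-endian, so the pattern bytes encode exactly one instruction. A quick sanity check with the stdlib:

```python
import struct

pattern = bytes.fromhex("483d5916ad12")
# 48 3d = REX.W prefix + CMP RAX, imm32; the imm32 follows little-endian
imm = struct.unpack_from("<i", pattern, 2)[0]
assert pattern[:2] == b"\x48\x3d" and imm == 0x12ad1659
```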

Now we dump a nice JSON dict to fsm_output.txt that looks something like:

  "State23518": {
    "address": "0x1405efbbb",
    "transitions": {
      "Z": {
        "ascii": 90,
        "next_state": "0xb7f2"
      }
    },
    "state_id": "0x5bde"
  },
  "State26761": {
    "address": "0x1405efd14",
    "transitions": {
      "H": {
        "ascii": 72,
        "next_state": "0xd154"
      },
      "o": {
        "ascii": 111,
        "next_state": "0xd155"
      }
    },
    "state_id": "0x6889"
  },
  "State43141": {
    "address": "0x1405efea4",
    "transitions": {
      "1": {
        "ascii": 49,
        "next_state": "0x151dd"
      }
    },
    "state_id": "0xa885"
  },...
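
The hex state_id / next_state values are just the decimal state numbers, so tying a transition target back to its StateN name is a one-liner (quick check against the dump above; this is the mapping the path-finder script relies on):

```python
# next_state / state_id are hex-encoded decimal state numbers
def target_name(next_state_hex: str) -> str:
    return f"State{int(next_state_hex, 16)}"

assert int("0x6889", 16) == 26761             # State26761's own state_id
assert target_name("0xd155") == "State53589"  # where its 'o' transition lands
```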

Now I figured that the longest path reachable from State0 would likely be my password! I wrote another script that takes fsm_output.txt as input and finds the 100 longest paths:

#!/usr/bin/env python3
"""
FSM Path Finder
Reads fsm_output.txt and finds all paths of a specified length from State0
"""

import json
from collections import deque

def load_state_machine(filename='fsm_output.txt'):
    """Load the state machine from JSON file"""
    with open(filename, 'r') as f:
        return json.load(f)

def build_state_mapping(state_machine):
    """Create mappings between state IDs and state names"""
    # Map from next_state hex value to state name
    state_id_to_name = {}
    
    # First, try to map based on state numbers
    # If a state is named "State42", its ID in transitions should relate to 42
    for state_name, state_data in state_machine.items():
        if state_name.startswith('State') and state_name[5:].isdigit():
            state_num = int(state_name[5:])
            # Try the hex representation of the state number
            state_id_to_name[hex(state_num)] = state_name
    
    # Also check if any transitions point to addresses that match state addresses
    for state_name, state_data in state_machine.items():
        addr = state_data.get('address', '')
        if addr:
            state_id_to_name[addr] = state_name
    
    return state_id_to_name

def build_adjacency_list(state_machine, state_id_to_name):
    """Build adjacency list for the state machine"""
    adjacency = {}
    
    for state_name, state_data in state_machine.items():
        adjacency[state_name] = []
        for char, info in state_data['transitions'].items():
            next_state_hex = info['next_state']
            if next_state_hex in state_id_to_name:
                target_state = state_id_to_name[next_state_hex]
                adjacency[state_name].append((target_state, char))
    
    return adjacency

def find_accepting_paths(adjacency, start_state, max_depth=50):
    """Find all simple paths from start_state to accepting states (no outgoing transitions)"""
    print(f"[*] Finding all accepting paths from {start_state} up to depth {max_depth}...")
    
    complete_paths = []
    
    def dfs(current, path_string, visited):
        if len(path_string) > max_depth:
            return
        
        # If no outgoing transitions, this is an accepting state
        if current not in adjacency or not adjacency[current]:
            complete_paths.append({
                'path': path_string,
                'length': len(path_string),
                'end_state': current
            })
            return
        
        # Explore neighbors (with cycle detection)
        for next_state, char in adjacency[current]:
            if next_state not in visited:
                visited.add(next_state)
                dfs(next_state, path_string + char, visited)
                visited.remove(next_state)
    
    # Start DFS
    visited = set([start_state])
    dfs(start_state, "", visited)
    
    # Sort by length descending
    complete_paths.sort(key=lambda x: x['length'], reverse=True)
    
    print(f"[*] Found {len(complete_paths)} accepting paths")
    return complete_paths

def save_paths_to_file(paths, state_machine, filename='paths_longest.txt'):
    """Save all paths to a file"""
    with open(filename, 'w') as f:
        f.write(f"# Top {len(paths)} longest accepting paths from State0\n")
        f.write(f"# Total accepting paths found: {len(paths)}\n\n")
        
        for i, path_info in enumerate(paths, 1):
            end_state = path_info['end_state']
            address = state_machine.get(end_state, {}).get('address', 'Unknown')
            f.write(f"Path {i}:\n")
            f.write(f"  Length: {path_info['length']}\n")
            f.write(f"  String: {path_info['path']}\n")
            f.write(f"  End State: {end_state}\n")
            f.write(f"  Address: {address}\n\n")
    
    print(f"[*] Saved {len(paths)} paths to {filename}")

def main():
    # Configuration
    input_file = 'fsm_output.txt'
    output_file = 'paths_longest.txt'
    start_state = 'State0'
    max_depth = 50
    num_longest = 100
    
    print(f"[*] Loading state machine from {input_file}")
    state_machine = load_state_machine(input_file)
    print(f"[*] Loaded {len(state_machine)} states")
    
    # Build mappings
    print(f"[*] Building state mappings...")
    state_id_to_name = build_state_mapping(state_machine)
    print(f"[*] Mapped {len(state_id_to_name)} state IDs")
    
    # Build adjacency list
    print(f"[*] Building adjacency list...")
    adjacency = build_adjacency_list(state_machine, state_id_to_name)
    
    # Count total transitions
    total_transitions = sum(len(neighbors) for neighbors in adjacency.values())
    print(f"[*] Found {total_transitions} total transitions")
    
    # Check if start state exists
    if start_state not in state_machine:
        print(f"[!] Error: {start_state} not found in state machine")
        print(f"[*] Available states: {', '.join(sorted(state_machine.keys())[:10])}...")
        return
    
    # Find accepting paths
    all_paths = find_accepting_paths(adjacency, start_state, max_depth=max_depth)
    
    # Take top num_longest longest paths
    paths = all_paths[:num_longest]
    
    if paths:
        save_paths_to_file(paths, state_machine, output_file)
        print(f"\n[*] Top {len(paths)} longest accepting paths from {start_state}")
        
        # Print all paths
        print(f"\n[*] All {len(paths)} paths:")
        for i, path_info in enumerate(paths, 1):
            end_state = path_info['end_state']
            address = state_machine.get(end_state, {}).get('address', 'Unknown')
            print(f"    Path {i}: length {path_info['length']}: {path_info['path']} (End State: {end_state}, Address: {address})")
    else:
        print(f"\n[!] No accepting paths found from {start_state}")
        
        # Debug: Show what states are reachable from start_state
        if start_state in adjacency:
            print(f"\n[*] States reachable from {start_state}:")
            for next_state, char in adjacency[start_state][:10]:
                print(f"    '{char}' -> {next_state}")

if __name__ == '__main__':
    main()

The above outputs:

python3 .\fsm_find_path.py
[*] Loading state machine from fsm_output.txt
[*] Loaded 45291 states
[*] Building state mappings...
[*] Mapped 90581 state IDs
[*] Building adjacency list...
[*] Found 45289 total transitions
[*] Finding all accepting paths from State0 up to depth 50...
[*] Found 22682 accepting paths
[*] Saved 100 paths to paths_longest.txt

[*] Top 100 longest accepting paths from State0

[*] All 100 paths:
    Path 1: length 15: iqg0nSeCHnOMPm2 (End State: State57775, Address: 0x14018c9e0)
    Path 2: length 14: JYCDECU4_EB7FR (End State: State35702, Address: 0x140bc0f41)
    Path 3: length 14: JYCDECU4_EB7Fz (End State: State35703, Address: 0x140134214)
    Path 4: length 14: JYCDECU4_EB7Y2 (End State: State35699, Address: 0x1404ef504)
    Path 5: length 14: JYCDECU4_EB7YY (End State: State35700, Address: 0x1402f82f2)

Going to 0x14018c9e0, we see the FSM dumper failed to find the last transition/state for some reason. In the disassembler we can see there's only one ASCII cmp instruction there, for 'Q'.

PS C:\Users\Someone\Downloads\5_-_ntfsm> .\ntfsm.exe iqg0nSeCHnOMPm2Q
correct!
Your reward: f1n1t3_st4t3_m4ch1n3s_4r3_fun@flare-on.com

One thing this challenge taught me is that having multiple tools in your arsenal is pretty important. IDA was failing, while jsdec and Ghidra each had some pieces of the puzzle. Not relying on one approach is key!

Challenge 6

This time, we get a 64-bit ELF binary.

We pop it in IDA and see a bunch of PyInstaller-related strings. I threw pyinstxtractor ( https://github.com/extremecoders-re/pyinstxtractor ) at it and got some extracted .pyc files and an interesting chat_logs.json. It seems like part of the chat is encrypted with LCG-XOR (and we have the plaintext), while the rest is encrypted with RSA, where we don't have the plaintext, which we probably need to recover!

I wanted to decompile the file challenge_to_compile.pyc.

uncompyle6 and decompyle3 didn't work for me, and the same went for python -m dis, but after compiling pycdc, I got:

warsang@DESKTOP-4H5U344:/mnt/c/Users/Someone/Downloads/6_-_Chain_of_Demands/chat_client_extracted/pycdc$ ./pycdc ../challenge_to_compile.pyc
# Source Generated with Decompyle++
# File: challenge_to_compile.pyc (Python 3.12)

import tkinter as tk
from tkinter import scrolledtext, messagebox, simpledialog, Checkbutton, BooleanVar, Toplevel
import platform
import hashlib
import time
import json
from threading import Thread
import math
import random
from Crypto.PublicKey import RSA
from Crypto.Util.number import bytes_to_long, long_to_bytes, isPrime
import os
import sys
from web3 import Web3
from eth_account import Account
from eth_account.signers.local import LocalAccount

def resource_path(relative_path):
Unsupported opcode: PUSH_EXC_INFO (105)
    '''
    Get the absolute path to a resource, which works for both development
    and for a PyInstaller-bundled executable.
    '''
    base_path = sys._MEIPASS
    return os.path.join(base_path, relative_path)
# WARNING: Decompyle incomplete


class SmartContracts:
    rpc_url = ''
    private_key = ''

    def deploy_contract(contract_bytes, contract_abi):
Unsupported opcode: PUSH_EXC_INFO (105)
        w3 = Web3(Web3.HTTPProvider(SmartContracts.rpc_url))
        if not w3.is_connected():
            raise ConnectionError(f'''[!] Failed to connect to Ethereum network at {SmartContracts.rpc_url}''')
        print(f'''[+] Connected to Sepolia network at {SmartContracts.rpc_url}''')
        print(f'''[+] Current block number: {w3.eth.block_number}''')
        if not SmartContracts.private_key:
            raise ValueError('Please add your private key.')
        account = Account.from_key(SmartContracts.private_key)
        w3.eth.default_account = account.address
        print(f'''[+] Using account: {account.address}''')
        balance_wei = w3.eth.get_balance(account.address)
        print(f'''[+] Account balance: {w3.from_wei(balance_wei, 'ether')} ETH''')
        if balance_wei == 0:
            print('[!] Warning: Account has 0 ETH. Deployment will likely fail. Get some testnet ETH from a faucet (e.g., sepoliafaucet.com)!')
        Contract = w3.eth.contract(abi = contract_abi, bytecode = contract_bytes)
        gas_estimate = Contract.constructor().estimate_gas()
        print(f'''[+] Estimated gas for deployment: {gas_estimate}''')
        gas_price = w3.eth.gas_price
        print(f'''[+] Current gas price: {w3.from_wei(gas_price, 'gwei')} Gwei''')
        transaction = Contract.constructor().build_transaction({
            'from': account.address,
            'nonce': w3.eth.get_transaction_count(account.address),
            'gas': gas_estimate + 200000,
            'gasPrice': gas_price })
        signed_txn = w3.eth.account.sign_transaction(transaction, private_key = SmartContracts.private_key)
        print('[+]  Deploying contract...')
        tx_hash = w3.eth.send_raw_transaction(signed_txn.raw_transaction)
        print(f'''[+] Deployment transaction sent. Hash: {tx_hash.hex()}''')
        print('[+] Waiting for transaction to be mined...')
        tx_receipt = w3.eth.wait_for_transaction_receipt(tx_hash, timeout = 300)
        print(f'''[+] Transaction receipt: {tx_receipt}''')
        if tx_receipt.status == 0:
            print('[!] Transaction failed (status 0). It was reverted.')
            return None
        contract_address = tx_receipt.contractAddress
        print(f'''[+] Contract deployed at address: {contract_address}''')
        deployed_contract = w3.eth.contract(address = contract_address, abi = contract_abi)
        return deployed_contract
    # WARNING: Decompyle incomplete



class LCGOracle:

    def __init__(self, multiplier, increment, modulus, initial_seed):
        self.multiplier = multiplier
        self.increment = increment
        self.modulus = modulus
        self.state = initial_seed
        self.contract_bytes = '6080604052348015600e575f5ffd5b506102e28061001c5f395ff3fe608060405234801561000f575f5ffd5b5060043610610029575f3560e01c8063115218341461002d575b5f5ffd5b6100476004803603810190610042919061010c565b61005d565b6040516100549190610192565b60405180910390f35b5f5f848061006e5761006d6101ab565b5b86868061007e5761007d6101ab565b5b8987090890505f5f8411610092575f610095565b60015b60ff16905081816100a69190610205565b858260016100b49190610246565b6100be9190610205565b6100c89190610279565b9250505095945050505050565b5f5ffd5b5f819050919050565b6100eb816100d9565b81146100f5575f5ffd5b50565b5f81359050610106816100e2565b92915050565b5f5f5f5f5f60a08688031215610125576101246100d5565b5b5f610132888289016100f8565b9550506020610143888289016100f8565b9450506040610154888289016100f8565b9350506060610165888289016100f8565b9250506080610176888289016100f8565b9150509295509295909350565b61018c816100d9565b82525050565b5f6020820190506101a55f830184610183565b92915050565b7f4e487b71000000000000000000000000000000000000000000000000000000005f52601260045260245ffd5b7f4e487b71000000000000000000000000000000000000000000000000000000005f52601160045260245ffd5b5f61020f826100d9565b915061021a836100d9565b9250828202610228816100d9565b9150828204841483151761023f5761023e6101d8565b5b5092915050565b5f610250826100d9565b915061025b836100d9565b9250828203905081811115610273576102726101d8565b5b92915050565b5f610283826100d9565b915061028e836100d9565b92508282019050808211156102a6576102a56101d8565b5b9291505056fea2646970667358221220c7e885c1633ad951a2d8168f80d36858af279d8b5fe2e19cf79eac15ecb9fdd364736f6c634300081e0033'
        self.contract_abi = [
            {
                'inputs': [
                    {
                        'internalType': 'uint256',
                        'name': 'LCG_MULTIPLIER',
                        'type': 'uint256' },
                    {
                        'internalType': 'uint256',
                        'name': 'LCG_INCREMENT',
                        'type': 'uint256' },
                    {
                        'internalType': 'uint256',
                        'name': 'LCG_MODULUS',
                        'type': 'uint256' },
                    {
                        'internalType': 'uint256',
                        'name': '_currentState',
                        'type': 'uint256' },
                    {
                        'internalType': 'uint256',
                        'name': '_counter',
                        'type': 'uint256' }],
                'name': 'nextVal',
                'outputs': [
                    {
                        'internalType': 'uint256',
                        'name': '',
                        'type': 'uint256' }],
                'stateMutability': 'pure',
                'type': 'function' }]
        self.deployed_contract = None


    def deploy_lcg_contract(self):
        self.deployed_contract = SmartContracts.deploy_contract(self.contract_bytes, self.contract_abi)


    def get_next(self, counter):
        print(f'''\n[+] Calling nextVal() with _currentState={self.state}''')
        self.state = self.deployed_contract.functions.nextVal(self.multiplier, self.increment, self.modulus, self.state, counter).call()
        print(f'''  _counter = {counter}: Result = {self.state}''')
        return self.state



class TripleXOROracle:

    def __init__(self):
        self.contract_bytes = '61030f61004d600b8282823980515f1a6073146041577f4e487b71000000000000000000000000000000000000000000000000000000005f525f60045260245ffd5b305f52607381538281f3fe7300000000000000000000000000000000000000003014608060405260043610610034575f3560e01c80636230075614610038575b5f5ffd5b610052600480360381019061004d919061023c565b610068565b60405161005f91906102c0565b60405180910390f35b5f5f845f1b90505f845f1b90505f61007f85610092565b9050818382181893505050509392505050565b5f5f8290506020815111156100ae5780515f525f5191506100b6565b602081015191505b50919050565b5f604051905090565b5f5ffd5b5f5ffd5b5f819050919050565b6100df816100cd565b81146100e9575f5ffd5b50565b5f813590506100fa816100d6565b92915050565b5f5ffd5b5f5ffd5b5f601f19601f8301169050919050565b7f4e487b71000000000000000000000000000000000000000000000000000000005f52604160045260245ffd5b61014e82610108565b810181811067ffffffffffffffff8211171561016d5761016c610118565b5b80604052505050565b5f61017f6100bc565b905061018b8282610145565b919050565b5f67ffffffffffffffff8211156101aa576101a9610118565b5b6101b382610108565b9050602081019050919050565b828183375f83830152505050565b5f6101e06101db84610190565b610176565b9050828152602081018484840111156101fc576101fb610104565b5b6102078482856101c0565b509392505050565b5f82601f83011261022357610222610100565b5b81356102338482602086016101ce565b91505092915050565b5f5f5f60608486031215610253576102526100c5565b5b5f610260868287016100ec565b9350506020610271868287016100ec565b925050604084013567ffffffffffffffff811115610292576102916100c9565b5b61029e8682870161020f565b9150509250925092565b5f819050919050565b6102ba816102a8565b82525050565b5f6020820190506102d35f8301846102b1565b9291505056fea26469706673582212203fc7e6cc4bf6a86689f458c2d70c565e7c776de95b401008e58ca499ace9ecb864736f6c634300081e0033'
        self.contract_abi = [
            {
                'inputs': [
                    {
                        'internalType': 'uint256',
                        'name': '_primeFromLcg',
                        'type': 'uint256' },
                    {
                        'internalType': 'uint256',
                        'name': '_conversationTime',
                        'type': 'uint256' },
                    {
                        'internalType': 'string',
                        'name': '_plaintext',
                        'type': 'string' }],
                'name': 'encrypt',
                'outputs': [
                    {
                        'internalType': 'bytes32',
                        'name': '',
                        'type': 'bytes32' }],
                'stateMutability': 'pure',
                'type': 'function' }]
        self.deployed_contract = None


    def deploy_triple_xor_contract(self):
        self.deployed_contract = SmartContracts.deploy_contract(self.contract_bytes, self.contract_abi)


    def encrypt(self, prime_from_lcg, conversation_time, plaintext_bytes):
        print(f'''\n[+] Calling encrypt() with prime_from_lcg={prime_from_lcg}, time={conversation_time}, plaintext={plaintext_bytes}''')
        ciphertext = self.deployed_contract.functions.encrypt(prime_from_lcg, conversation_time, plaintext_bytes).call()
        print(f'''  _ciphertext = {ciphertext.hex()}''')
        return ciphertext
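
Judging by the class name and the two XOR (0x18) opcodes in the runtime bytecode, the contract appears to return prime ^ time ^ first-32-bytes-of-plaintext as a bytes32. If that reading is right, a known-plaintext message leaks the LCG prime directly. A hedged sketch of the inverse (my own helper, assuming EVM-style right-padded string data and big-endian words):

```python
def recover_key_block(ciphertext: bytes, conversation_time: int, plaintext: bytes) -> int:
    # invert ct = prime ^ time ^ pt, all treated as 32-byte big-endian words
    pt_word = plaintext[:32].ljust(32, b"\x00")  # assumption: right-padded like EVM string data
    ct = int.from_bytes(ciphertext, "big")
    pt = int.from_bytes(pt_word, "big")
    return ct ^ conversation_time ^ pt
```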



class ChatLogic:

    def __init__(self):
        self.lcg_oracle = None
        self.xor_oracle = None
        self.rsa_key = None
        self.seed_hash = None
        self.super_safe_mode = False
        self.message_count = 0
        self.conversation_start_time = 0
        self.chat_history = []
        self._initialize_crypto_backend()


    def _get_system_artifact_hash(self):
        artifact = platform.node().encode('utf-8')
        hash_val = hashlib.sha256(artifact).digest()
        seed_hash = int.from_bytes(hash_val, 'little')
        print(f'''[SETUP]  - Generated Seed {seed_hash}...''')
        return seed_hash


    def _generate_primes_from_hash(self, seed_hash):
        primes = []
        current_hash_byte_length = (seed_hash.bit_length() + 7) // 8
        current_hash = seed_hash.to_bytes(current_hash_byte_length, 'little')
        print('[SETUP] Generating LCG parameters from system artifact...')
        iteration_limit = 10000
        iterations = 0
        if len(primes) < 3 and iterations < iteration_limit:
            current_hash = hashlib.sha256(current_hash).digest()
            candidate = int.from_bytes(current_hash, 'little')
            iterations += 1
            if candidate.bit_length() == 256 and isPrime(candidate):
                primes.append(candidate)
                print(f'''[SETUP]  - Found parameter {len(primes)}: {str(candidate)[:20]}...''')
            if len(primes) < 3 and iterations < iteration_limit:
                continue
        if len(primes) < 3:
            error_msg = '[!] Error: Could not find 3 primes within iteration limit.'
            print('Current Primes: ', primes)
            print(error_msg)
            exit()
        return (primes[0], primes[1], primes[2])
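
Note that pycdc appears to flatten the while-loop here into an "if ... continue" block (the same thing happens in generate_rsa_key_from_lcg further down). A cleaned-up, runnable reading of the prime derivation (my own reconstruction, with a stdlib Miller-Rabin standing in for Crypto.Util.number.isPrime):

```python
import hashlib
import random

def is_probable_prime(n, rounds=16):
    # Miller-Rabin, stand-in for Crypto.Util.number.isPrime
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_primes_from_hash(seed_hash, want=3, iteration_limit=10000):
    # hash-chain SHA-256 over the seed, keeping digests that are 256-bit primes
    current = seed_hash.to_bytes((seed_hash.bit_length() + 7) // 8, 'little')
    primes, iterations = [], 0
    while len(primes) < want and iterations < iteration_limit:
        current = hashlib.sha256(current).digest()
        candidate = int.from_bytes(current, 'little')
        iterations += 1
        if candidate.bit_length() == 256 and is_probable_prime(candidate):
            primes.append(candidate)
    return primes
```

So the three LCG parameters (multiplier, increment, modulus) are fully determined by the hostname hash.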


    def _initialize_crypto_backend(self):
        self.seed_hash = self._get_system_artifact_hash()
        (m, c, n) = self._generate_primes_from_hash(self.seed_hash)
        self.lcg_oracle = LCGOracle(m, c, n, self.seed_hash)
        self.lcg_oracle.deploy_lcg_contract()
        print('[SETUP] LCG Oracle is on-chain...')
        self.xor_oracle = TripleXOROracle()
        self.xor_oracle.deploy_triple_xor_contract()
        print('[SETUP] Triple XOR Oracle is on-chain...')
        print('[SETUP] Crypto backend initialized...')


    def generate_rsa_key_from_lcg(self):
        # Unsupported opcode: BEFORE_WITH (108)
        print('[RSA] Generating RSA key from on-chain LCG primes...')
        lcg_for_rsa = LCGOracle(self.lcg_oracle.multiplier, self.lcg_oracle.increment, self.lcg_oracle.modulus, self.seed_hash)
        lcg_for_rsa.deploy_lcg_contract()
        primes_arr = []
        rsa_msg_count = 0
        iteration_limit = 10000
        iterations = 0
        # The decompiler emitted `if ... continue` here; this is really a while loop
        while len(primes_arr) < 8 and iterations < iteration_limit:
            candidate = lcg_for_rsa.get_next(rsa_msg_count)
            rsa_msg_count += 1
            iterations += 1
            if candidate.bit_length() == 256 and isPrime(candidate):
                primes_arr.append(candidate)
                print(f'''[RSA]  - Found 256-bit prime #{len(primes_arr)}''')
        print('Primes Array: ', primes_arr)
        if len(primes_arr) < 8:
            error_msg = '[RSA] Error: Could not find 8 primes within iteration limit.'
            print('Current Primes: ', primes_arr)
            print(error_msg)
            return error_msg
        n = 1  # decompiler emitted None; the product accumulator must start at 1
        for p_val in primes_arr:
            n *= p_val
        phi = 1
        for p_val in primes_arr:
            phi *= p_val - 1
        e = 65537
        if math.gcd(e, phi) != 1:
            error_msg = '[RSA] Error: Public exponent e is not coprime with phi(n). Cannot generate key.'
            print(error_msg)
            return error_msg
        self.rsa_key = RSA.construct((n, e))  # decompiler lost the module name (Crypto.PublicKey.RSA)
    # WARNING: Decompyle incomplete


    def process_message(self, plaintext):
        if self.conversation_start_time == 0:
            self.conversation_start_time = time.time()
        conversation_time = int(time.time() - self.conversation_start_time)
        if self.super_safe_mode and self.rsa_key:
            plaintext_bytes = plaintext.encode('utf-8')
            plaintext_enc = bytes_to_long(plaintext_bytes)
            _enc = pow(plaintext_enc, self.rsa_key.e, self.rsa_key.n)
            ciphertext = _enc.to_bytes(self.rsa_key.n.bit_length(), 'little').rstrip(b'\x00')
            encryption_mode = 'RSA'
            plaintext = '[ENCRYPTED]'
        else:
            prime_from_lcg = self.lcg_oracle.get_next(self.message_count)
            ciphertext = self.xor_oracle.encrypt(prime_from_lcg, conversation_time, plaintext)
            encryption_mode = 'LCG-XOR'
        log_entry = {
            'conversation_time': conversation_time,
            'mode': encryption_mode,
            'plaintext': plaintext,
            'ciphertext': ciphertext.hex() }
        self.chat_history.append(log_entry)
        self.save_chat_log()
        return (f'''[{conversation_time}s] {plaintext}''', f'''[{conversation_time}s] {ciphertext.hex()}''')
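Note the bug in the RSA branch: `int.to_bytes` takes a length in *bytes*, but the code passes `self.rsa_key.n.bit_length()`, a count of *bits*, so the buffer is ~8x too large and the excess is stripped off afterwards. A tiny demo of what that does:

```python
# to_bytes takes a length in BYTES; passing bit_length() allocates ~8x too much.
# The rstrip only works because the integer is rendered little-endian, so the
# padding ends up as trailing zero bytes.
x = 0xDEADBEEF
buf = x.to_bytes(x.bit_length(), 'little')   # 32 bytes, not 4
assert len(buf) == 32
assert buf.rstrip(b'\x00') == x.to_bytes(4, 'little')
assert int.from_bytes(buf.rstrip(b'\x00'), 'little') == x
```

This is why the ciphertexts in the chat log still come out at the expected ~256 bytes, and why they must be parsed as little-endian when decrypting.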


    def save_chat_log(self):
        # Unsupported opcode: BEFORE_WITH (108)
        pass
    # WARNING: Decompyle incomplete



class ChatApp(tk.Tk):
    # Unsupported opcode: MAKE_CELL (225)
    pass
# WARNING: Decompyle incomplete

if __name__ == '__main__':
    app = ChatApp()
    app.mainloop()

Note the two byte blobs which are EVM bytecode.

We run a script on chat_log.json to recover the keystream:

import json
from binascii import unhexlify, hexlify

chat_log = json.load(open('chat_log.json', 'r'))

def recover_keystream(entry):
    c = unhexlify(entry["ciphertext"])
    p = entry["plaintext"].encode("utf-8")
    ks = bytearray(len(c))
    for i in range(len(c)):
        pi = p[i] if i < len(p) else 0
        ks[i] = c[i] ^ pi
    return bytes(ks)

for e in [x for x in chat_log if x["mode"] == "LCG-XOR"]:
    ks = recover_keystream(e)
    print(f'conversation_time={e["conversation_time"]}, plaintext={e["plaintext"]!r}')
    print(hexlify(ks).decode())
    print()

When run, we get:

PS C:\Users\Someone\Downloads\6_-_Chain_of_Demands\chat_client_extracted> python3 .\recoverkeystream.py
conversation_time=0, plaintext='Hello'
a151de1d76f12318fe16e8cd1c1678fd3b0a752eca163a7261a7e2510184bbe9

conversation_time=4, plaintext='How are you?'
6dd058f178f1f7d4ea32bff17d747c1e0715865b21358418e67f94163513eae4

conversation_time=11, plaintext='Terrible...'
9d977c2708ce9d171e72d6f04c13e643c988aa5ab29b5499c93df112687c8c7c

conversation_time=13, plaintext='Is this a secure channel?'
73cae987e626055a7291560caeca61be4bd8fbff22e4324440b0c9def0288e46

conversation_time=16, plaintext="Yes, it's on the blockchain."
660893ee26544aa9f477586a4f9b6ff853aef774192018fcbb444649493f6fc5

conversation_time=24, plaintext='Erm enable super safe mode'
3d0e9be0db57abc0771c288fa4ceaf1a23681c77762068d552795d361b106b6d

conversation_time=30, plaintext='Ok, activating now'
2c419a382877723c968f1bf9c5679817ccd4da241d4b50bab99f74f169d456db

Couple of interesting things we can gather from the python we got earlier:

  1. The RSA key is weak because it's composed of 8 × 256-bit primes (total ~2048 bits)

  2. The primes come from a predictable LCG (Linear Congruential Generator) from which we just recovered the key stream

  3. We probably need to factor the RSA modulus to get the private key and decrypt the messages
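Point 2 is worth spelling out: when the modulus is known, three consecutive LCG outputs are enough to recover the multiplier and increment by solving two linear congruences. A quick sketch with made-up parameters (n chosen prime so the modular inverse exists):

```python
# x_{k+1} = a * x_k + c (mod n); subtracting consecutive relations gives
# x3 - x2 = a * (x2 - x1) (mod n), from which a and then c fall out.
n = 2**127 - 1            # demo modulus (Mersenne prime)
a_true, c_true = 48271, 12345
x0 = 987654321
x1 = (a_true * x0 + c_true) % n
x2 = (a_true * x1 + c_true) % n
x3 = (a_true * x2 + c_true) % n

a = ((x3 - x2) * pow(x2 - x1, -1, n)) % n
c = (x2 - a * x1) % n
assert (a, c) == (a_true, c_true)
```

So even without the hostname, a handful of observed outputs pins down the generator completely.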

I also disassembled both contracts at https://ethervm.io/decompile but didn't find the output super helpful.

One thing I thought of doing before going down the factorization approach was to try to find the hostname, since that's what seeds the LCG. After some extensive grep + strings, I couldn't find anything. For the RSA factorization, I used [factordb](https://factordb.com/index.php?query=966937097264573110291784941768218419842912477944108020986104301819288091060794069566383434848927824136504758249488793818136949609024508201274193993592647664605167873625565993538947116786672017490835007254958179800254950175363547964901595712823487867396044588955498965634987478506533221719372965647518750091013794771623552680465087840964283333991984752785689973571490428494964532158115459786807928334870321963119069917206505787030170514779392407953156221948773236670005656855810322260623193397479565769347040107022055166737425082196480805591909580137453890567586730244300524109754079060045173072482324926779581706647) to factor RSA's n.

After a bit of drafting Python, experimenting, and some LLM back-and-forth, I got:

#!/usr/bin/env python3


import json
from binascii import unhexlify, hexlify
from Crypto.PublicKey import RSA

# Your factors from factordb
factors = [
    62826068095404038148338678434404643116583820572865189787368764098892510936793,
    68446593057460676025047989394445774862028837156496043637575024036696645401289,
    69802783227378026511719332106789335301376047817734407431543841272855455052067,
    72967016216206426977511399018380411256993151454761051136963936354667101207529,
    75395288067150543091997907493708187002382230701390674177789205231462589994993,
    79611551309049018061300429096903741339200167241148430095608259960783012192237,
    82836473202091099900869551647600727408082364801577205107017971703263472445197,
    88790251731800173019114073860734130032527125661685690883849562991870715928701
]

# Load the public key
with open('public.pem', 'r') as f:
    pub_key = RSA.import_key(f.read())

n = pub_key.n
e = pub_key.e

print("=== RSA Key Info ===")
print(f"n bit length: {n.bit_length()} bits")
print(f"e = {e}")

# Verify factors
product = 1
for f in factors:
    product *= f

assert product == n, "Factors don't match!"
print("✓ Factors verified\n")

# Compute phi(n)
phi = 1
for p in factors:
    phi *= (p - 1)

# Compute private exponent
d = pow(e, -1, phi)

print("=== Understanding the Bug ===")
print(f"n.bit_length() = {n.bit_length()}")
print(f"Proper byte length would be: {(n.bit_length() + 7) // 8}")
print(f"But code uses: _enc.to_bytes(n.bit_length(), 'little')")
print(f"This creates a buffer of {n.bit_length()} BYTES (not bits)!")
print()

# Load chat log
with open('chat_log.json', 'r') as f:
    chat_log = json.load(f)

print("=== Decrypting RSA Messages ===\n")

for entry in chat_log:
    if entry["mode"] == "RSA":
        ct_hex = entry["ciphertext"]
        ct_bytes = unhexlify(ct_hex)
        
        print(f"Time {entry['conversation_time']}:")
        print(f"  Ciphertext: {ct_hex[:64]}...")
        print(f"  Ciphertext length: {len(ct_bytes)} bytes")
        
        # The ciphertext is little-endian
        ct_int = int.from_bytes(ct_bytes, 'little')
        
        # Decrypt
        pt_int = pow(ct_int, d, n)
        print(f"  Decrypted integer: {pt_int}")
        print(f"  Decrypted int bit length: {pt_int.bit_length()}")
        
        # Now we need to reverse the buggy encoding
        # Original encoding: plaintext_bytes -> bytes_to_long -> RSA encrypt -> 
        #                    to_bytes(n.bit_length(), 'little').rstrip(b'\x00')
        
        # So to decrypt, we reverse it:
        # The plaintext integer needs to be converted back to bytes
        
        # Try to convert using little-endian
        try:
            # Convert to bytes - use enough bytes to hold the value
            pt_bytes = pt_int.to_bytes((pt_int.bit_length() + 7) // 8, 'little')
            message = pt_bytes.decode('utf-8', errors='replace').rstrip('\x00')
            print(f"  Plaintext (attempt 1): {message}")
        except Exception as ex:
            print(f"  Error in decoding: {ex}")
        
        # Try big-endian too
        try:
            pt_bytes = pt_int.to_bytes((pt_int.bit_length() + 7) // 8, 'big')
            message = pt_bytes.decode('utf-8', errors='replace').rstrip('\x00')
            print(f"  Plaintext (attempt 2 - big endian): {message}")
        except Exception as ex:
            print(f"  Error in decoding: {ex}")
        
        # The encryption process was:
        # 1. plaintext.encode('utf-8') -> plaintext_bytes
        # 2. bytes_to_long(plaintext_bytes) -> plaintext_enc (BIG ENDIAN by default)
        # 3. pow(plaintext_enc, e, n) -> _enc
        # 4. _enc.to_bytes(n.bit_length(), 'little') -> ciphertext (LITTLE ENDIAN)
        
        # So decryption should be:
        # 1. int.from_bytes(ciphertext, 'little') -> ct_int 
        # 2. pow(ct_int, d, n) -> pt_int 
        # 3. long_to_bytes(pt_int) -> plaintext_bytes (BIG ENDIAN)
        
        from Crypto.Util.number import long_to_bytes
        try:
            pt_bytes = long_to_bytes(pt_int)
            message = pt_bytes.decode('utf-8', errors='replace')
            print(f"  Plaintext (attempt 3 - using long_to_bytes): {message}")
            print()
        except Exception as ex:
            print(f"  Error: {ex}")
            print()

print("\n" + "="*70)
print("FINAL RESULTS:")
print("="*70 + "\n")

# Try the most likely correct interpretation
for entry in chat_log:
    if entry["mode"] == "RSA":
        ct_bytes = unhexlify(entry["ciphertext"])
        ct_int = int.from_bytes(ct_bytes, 'little')
        pt_int = pow(ct_int, d, n)
        
        from Crypto.Util.number import long_to_bytes
        pt_bytes = long_to_bytes(pt_int)
        
        try:
            message = pt_bytes.decode('utf-8', errors='replace')
            print(f"[Time {entry['conversation_time']}] {message}")
        except:
            print(f"[Time {entry['conversation_time']}] (binary: {hexlify(pt_bytes).decode()[:100]}...)")

Running the above, we get:

python3 factormodulus.py
=== RSA Key Info ===
n bit length: 2043 bits
e = 65537
✓ Factors verified

=== Understanding the Bug ===
n.bit_length() = 2043
Proper byte length would be: 256
But code uses: _enc.to_bytes(n.bit_length(), 'little')
This creates a buffer of 2043 BYTES (not bits)!

=== Decrypting RSA Messages ===

Time 242:
  Ciphertext: 680a65364a498aa87cf17c934ab308b2aee0014aee5b0b7d289b5108677c7ad1...
  Ciphertext length: 256 bytes
  Decrypted integer: 26899266944815515925666568004260411828298976650656839231590722623
  Decrypted int bit length: 215
  Plaintext (attempt 1): ?liame ruoy s'tahw yllautcA
  Plaintext (attempt 2 - big endian): Actually what's your email?
  Plaintext (attempt 3 - using long_to_bytes): Actually what's your email?

Time 249:
  Ciphertext: 6f70034472ce115fc82a08560bd22f0e7f373e6ef27bca6e4c8f67fedf4031be...
  Ciphertext length: 256 bytes
  Decrypted integer: 1980308561205863721106473263805640979245120929520793020933813200711533
  Decrypted int bit length: 231
  Plaintext (attempt 1): moc.no-eralf@8rG_5i_3b3W s'tI
  Plaintext (attempt 2 - big endian): It's W3b3_i5_Gr8@flare-on.com
  Plaintext (attempt 3 - using long_to_bytes): It's W3b3_i5_Gr8@flare-on.com


======================================================================
FINAL RESULTS:
======================================================================

[Time 242] Actually what's your email?
[Time 249] It's W3b3_i5_Gr8@flare-on.com

Challenge 7 - NOT SOLVED!

Disclaimer: I never finished challenge 7 for lack of time; below are a couple of my notes, which I'll leave here just to illustrate the thought process. I went a bit further with WinDbg TTD and got a good handle on where the decryption was happening for the first packet, but never got around to writing a decryptor for the packets and solving the challenge.

We get a C++ x64 binary and a pcap HTTP capture. Following the HTTP stream we get:

GET /good HTTP/1.1
User-Agent: Mozilla/5.0 (Avocado OS; 1-Core Toaster) AppleWebKit/537.36 (XML, like Gecko) FLARE/1.0
Authorization: Bearer e4b8058f06f7061e8f0f8ed15d23865ba2427b23a695d9b27bc308a26d
Accept-Encoding: 
Connection: close
Accept: */*
Host: twelve.flare-on.com:8000

HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/3.10.11
Date: Wed, 20 Aug 2025 06:12:07 GMT
Content-type: application/json

{"d": "085d8ea282da6cf76bb2765bc3b26549a1f6bdf08d8da2a62e05ad96ea645c685da48d66ed505e2e28b968d15dabed15ab1500901eb9da4606468650f72550483f1e8c58ca13136bb8028f976bedd36757f705ea5f74ace7bd8af941746b961c45bcac1eaf589773cecf6f1c620e0e37ac1dfc9611aa8ae6e6714bb79a186f47896f18203eddce97f496b71a630779b136d7bf0c82d560"}
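The encrypted payload in the `d` field is just hex, so pulling it out for later analysis is a couple of lines (the string below is a truncated sample of the real blob, for illustration only):

```python
import json

# Truncated sample of the "d" value from the /good response
body = '{"d": "085d8ea282da6cf76bb2765bc3b26549"}'
payload = bytes.fromhex(json.loads(body)["d"])
print(len(payload), payload[:4].hex())
```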

The first interesting thing when opening the binary in IDA was the screen we were greeted with (screenshot not reproduced here).

Looking at strings we see rustc-hyper, which is a Rust HTTP library; huh, so maybe this isn't a C++ binary, even though Detect It Easy was pointing to MSVC as the compiler. I had no idea, but it seems you can compile Rust with the MSVC toolchain. In retrospect, this was a red herring and we were dealing with C++.

When we run this in x64dbg, we notice we hit a TLS callback; in IDA:

.text:000000014044C1A4 ; =============== S U B R O U T I N E =======================================
.text:000000014044C1A4
.text:000000014044C1A4 ; Attributes: library function
.text:000000014044C1A4
.text:000000014044C1A4 ; void __fastcall mainTLScallback(__int64, int)
.text:000000014044C1A4                 public mainTLScallback
.text:000000014044C1A4 mainTLScallback proc near               ; DATA XREF: .rdata:TlsCallbacks↓o
.text:000000014044C1A4                                         ; .rdata:off_1404687B0↓o ...
.text:000000014044C1A4
.text:000000014044C1A4 arg_0           = qword ptr  8
.text:000000014044C1A4
.text:000000014044C1A4 ; __unwind { // __CxxFrameHandler4
.text:000000014044C1A4                 cmp     edx, 2
.text:000000014044C1A7                 jnz     short locret_14044C209
.text:000000014044C1A9                 mov     [rsp+arg_0], rbx
.text:000000014044C1AE                 push    rdi
.text:000000014044C1AF                 sub     rsp, 20h
.text:000000014044C1B3                 mov     ecx, cs:TlsIndex
.text:000000014044C1B9                 mov     rax, gs:58h
.text:000000014044C1C2                 mov     r8d, 10h
.text:000000014044C1C8                 mov     rdx, [rax+rcx*8]
.text:000000014044C1CC                 cmp     byte ptr [rdx+r8], 1
.text:000000014044C1D1                 jz      short loc_14044C1FF
.text:000000014044C1D3                 mov     byte ptr [rdx+r8], 1
.text:000000014044C1D8                 lea     rbx, unk_1404686E8
.text:000000014044C1DF                 lea     rdi, unk_1404686E8
.text:000000014044C1E6                 jmp     short loc_14044C1FA
.text:000000014044C1E8 ; ---------------------------------------------------------------------------
.text:000000014044C1E8
.text:000000014044C1E8 loc_14044C1E8:                          ; CODE XREF: mainTLScallback+59↓j
.text:000000014044C1E8                 mov     rax, [rbx]
.text:000000014044C1EB                 test    rax, rax
.text:000000014044C1EE                 jz      short loc_14044C1F6
.text:000000014044C1F0                 call    cs:__guard_dispatch_icall_fptr
.text:000000014044C1F6
.text:000000014044C1F6 loc_14044C1F6:                          ; CODE XREF: mainTLScallback+4A↑j
.text:000000014044C1F6                 add     rbx, 8
.text:000000014044C1FA
.text:000000014044C1FA loc_14044C1FA:                          ; CODE XREF: mainTLScallback+42↑j
.text:000000014044C1FA                 cmp     rbx, rdi
.text:000000014044C1FD                 jnz     short loc_14044C1E8
.text:000000014044C1FF
.text:000000014044C1FF loc_14044C1FF:                          ; CODE XREF: mainTLScallback+2D↑j
.text:000000014044C1FF                 mov     rbx, [rsp+28h+arg_0]
.text:000000014044C204                 add     rsp, 20h
.text:000000014044C208                 pop     rdi
.text:000000014044C209
.text:000000014044C209 locret_14044C209:                       ; CODE XREF: mainTLScallback+3↑j
.text:000000014044C209                 retn
.text:000000014044C209 ; } // starts at 14044C1A4
.text:000000014044C209 mainTLScallback endp
.text:000000014044C209

The file's behavior report on VT is interesting (https://www.virustotal.com/gui/file/14e60fb48803c06762b4fffdd4e0a2bd2bcac7ae81c92d7393f1198951dbfbbb/behavior): we see it drops a .cab containing a broken BMP called doublesuns.bmp (https://www.virustotal.com/gui/file/ea71829a0c7072e4bdda5df1bd1ee044b916cf9dfaf04469a962af7027d339f8/content).

We can trace from that main TLS callback like so:

mainTLS callback -> TLSCallback1 -> Safe Exception Handler -> Main obfuscated function

I ended up using a TTD trace in IDA. One tip that helped was using the new IDA 9 shortcuts: I set shortcuts for backwards step into and backwards step over, then traced from the buffer passed to ws2_32.dll's send function to see how the Bearer token was made. With no deobfuscation, this took a while... Finally, after A LONG time tracing this buffer, I found a call that transformed it into the bearer token. Before that call, the decrypted buffer looked like:

000001EBC2512DC0  40 00 25 00 53 00 79 00  73 00 74 00 65 00 6D 00  @.%.S.y.s.t.e.m.
000001EBC2512DD0  52 00 6F 00 6F 00 74 00  25 00 5C 00 73 00 79 00  R.o.o.t.%.\.s.y.
000001EBC2512DE0  73 00 74 00 65 00 6D 00  33 00 32 00 5C 00 6E 00  s.t.e.m.3.2.\.n.
000001EBC2512DF0  6C 00 61 00 73 00 76 00  63 00 2E 00 64 00 6C 00  l.a.s.v.c...d.l.
000001EBC2512E00  6C 00 2C 00 2D 00 31 00  30 00 30 00 30 00 00 00  l.,.-.1.0.0.0...
000001EBC2512E10  00 00 00 00 00 00 00 00  F7 95 8E 02 00 1A 00 80  ................

After going through the function, we get:

000001EBC2512DC0  65 34 62 38 30 35 38 66  64 30 64 30 32 63 33 66  e4b8058fd0d02c3f
000001EBC2512DD0  63 61 30 32 39 62 66 38  38 36 36 35 38 36 38 36  ca029bf886658686
000001EBC2512DE0  36 63 34 32 32 65 36 36  65 39 33 64 63 32 66 61  6c422e66e93dc2fa
000001EBC2512DF0  33 32 61 63 66 32 63 33  66 30 66 61 39 37 65 36  32acf2c3f0fa97e6
000001EBC2512E00  37 33 00 00 2D 00 31 00  30 00 30 00 30 00 00 00  73..-.1.0.0.0...
000001EBC2512E10  00 00 00 00 00 00 00 00  F7 95 8E 02 00 1A 00 80  ................

That's probably why we got a bearer token different from the one in the pcap: the token apparently depends on the host. Are we on a different code path?
