{"author":"Stefan Eissing","author_email":"stefan@eissing.org","author_time":1739205611,"commit_time":1740063198,"committer":"Daniel Stenberg","committer_email":"daniel@haxx.se","hash":"f78700814d702c523a7a2c3766a1ca60f3db7edc","message":"client writer: handle pause before decoding\n\nAdds a \"cw-pause\" client writer in the PROTOCOL phase that buffers\noutput when the client has paused the transfer. This prevents content\ndecoding from blowing the buffer in the \"cw-out\" writer.\n\nAdded test_02_35 that downloads two 100MB gzip bombs in parallel and\npauses after 1MB of decoded 0's.\n\nThis is a solution to issue #16280, with some limitations:\n- cw-out still needs buffering of its own, since it can be paused\n  \"in the middle\" of a write that started with some KB of gzipped\n  zeros and exploded into several MB of calls to cw-out.\n- cw-pause will then start buffering on its own *after* the write\n  that caused the pause. cw-pause has no buffer limits, but the\n  data it buffers is still content-encoded.\n  Protocols like http/1.1 stop receiving, h2/h3 have window sizes,\n  so the cw-pause buffer should not grow out of control, at least\n  for these protocols.\n- the current limit on cw-out's buffer is ~75MB (for whatever\n  historical reason). A potential content-encoding that blows 16KB\n  (the common h2 chunk size) into > 75MB would still blow the buffer,\n  making the transfer fail. A gzip of 0's turns 16KB into ~16MB, so\n  that still works.\n\nA better solution would be to allow CURLE_AGAIN handling in the client\nwriter chain and make all content encoders handle that. This would stop\nthe explosion of decoded output on a pause right away. But this is a\nlarge change to the decoder operations.\n\nReported-by: lf- on github\nFixes #16280\nCloses #16296\n","parents":["279a4772ae67dd4d9770e11e60040f9113b1c345"],"tree_hash":"23b1d89eccb6980c205de25405d7f9077943977b"}